Apr 13 19:23:14.322651 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 13 19:23:14.322705 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:23:14.322775 kernel: KASLR disabled due to lack of seed
Apr 13 19:23:14.322818 kernel: efi: EFI v2.7 by EDK II
Apr 13 19:23:14.322836 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Apr 13 19:23:14.322853 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:23:14.322871 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 13 19:23:14.322888 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 13 19:23:14.322905 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 13 19:23:14.322921 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 13 19:23:14.322946 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 13 19:23:14.322963 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 13 19:23:14.322979 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 13 19:23:14.322996 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 13 19:23:14.323015 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 13 19:23:14.323037 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 13 19:23:14.323056 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 13 19:23:14.323073 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 13 19:23:14.323091 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 13 19:23:14.323108 kernel: printk: bootconsole [uart0] enabled
Apr 13 19:23:14.323125 kernel: NUMA: Failed to initialise from firmware
Apr 13 19:23:14.323143 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 13 19:23:14.323160 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 13 19:23:14.323177 kernel: Zone ranges:
Apr 13 19:23:14.323194 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 13 19:23:14.323212 kernel: DMA32 empty
Apr 13 19:23:14.323234 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 13 19:23:14.323252 kernel: Movable zone start for each node
Apr 13 19:23:14.323269 kernel: Early memory node ranges
Apr 13 19:23:14.323286 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 13 19:23:14.323303 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 13 19:23:14.323320 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 13 19:23:14.323337 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 13 19:23:14.323354 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 13 19:23:14.323372 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 13 19:23:14.323388 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 13 19:23:14.323405 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 13 19:23:14.323423 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 13 19:23:14.323444 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 13 19:23:14.323462 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:23:14.323487 kernel: psci: PSCIv1.0 detected in firmware.
Apr 13 19:23:14.323506 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:23:14.323525 kernel: psci: Trusted OS migration not required
Apr 13 19:23:14.323549 kernel: psci: SMC Calling Convention v1.1
Apr 13 19:23:14.323568 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 13 19:23:14.323586 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:23:14.323604 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:23:14.323623 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:23:14.323642 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:23:14.323660 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:23:14.323678 kernel: CPU features: detected: Spectre-v2
Apr 13 19:23:14.323696 kernel: CPU features: detected: Spectre-v3a
Apr 13 19:23:14.323714 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:23:14.325646 kernel: CPU features: detected: ARM erratum 1742098
Apr 13 19:23:14.325702 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 13 19:23:14.325725 kernel: alternatives: applying boot alternatives
Apr 13 19:23:14.325783 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:23:14.325805 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:23:14.325825 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:23:14.325844 kernel: Fallback order for Node 0: 0
Apr 13 19:23:14.325862 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 13 19:23:14.325882 kernel: Policy zone: Normal
Apr 13 19:23:14.325901 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:23:14.325949 kernel: software IO TLB: area num 2.
Apr 13 19:23:14.325975 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 13 19:23:14.326010 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Apr 13 19:23:14.326030 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:23:14.326051 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:23:14.326072 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:23:14.326093 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:23:14.326114 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:23:14.326134 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:23:14.326154 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:23:14.326174 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:23:14.326193 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:23:14.326212 kernel: GICv3: 96 SPIs implemented
Apr 13 19:23:14.326239 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:23:14.326259 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:23:14.326278 kernel: GICv3: GICv3 features: 16 PPIs
Apr 13 19:23:14.326297 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 13 19:23:14.326315 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 13 19:23:14.326334 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 13 19:23:14.326354 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 13 19:23:14.326373 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 13 19:23:14.326391 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 13 19:23:14.326411 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 13 19:23:14.326430 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:23:14.326449 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 13 19:23:14.326475 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 13 19:23:14.326494 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 13 19:23:14.326514 kernel: Console: colour dummy device 80x25
Apr 13 19:23:14.326534 kernel: printk: console [tty1] enabled
Apr 13 19:23:14.326554 kernel: ACPI: Core revision 20230628
Apr 13 19:23:14.326574 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 13 19:23:14.326593 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:23:14.326613 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:23:14.326632 kernel: landlock: Up and running.
Apr 13 19:23:14.326658 kernel: SELinux: Initializing.
Apr 13 19:23:14.326677 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:23:14.326697 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:23:14.326716 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:23:14.326813 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:23:14.326841 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:23:14.326863 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:23:14.326883 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 13 19:23:14.326902 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 13 19:23:14.326930 kernel: Remapping and enabling EFI services.
Apr 13 19:23:14.326950 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:23:14.326968 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:23:14.326988 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 13 19:23:14.327007 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 13 19:23:14.327025 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 13 19:23:14.327045 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:23:14.327063 kernel: SMP: Total of 2 processors activated.
Apr 13 19:23:14.327082 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:23:14.327107 kernel: CPU features: detected: 32-bit EL1 Support
Apr 13 19:23:14.327126 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:23:14.327146 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:23:14.327176 kernel: alternatives: applying system-wide alternatives
Apr 13 19:23:14.327201 kernel: devtmpfs: initialized
Apr 13 19:23:14.327220 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:23:14.327239 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:23:14.327258 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:23:14.327277 kernel: SMBIOS 3.0.0 present.
Apr 13 19:23:14.327301 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 13 19:23:14.327321 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:23:14.327340 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:23:14.327359 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:23:14.327379 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:23:14.327398 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:23:14.327417 kernel: audit: type=2000 audit(0.318:1): state=initialized audit_enabled=0 res=1
Apr 13 19:23:14.327437 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:23:14.327461 kernel: cpuidle: using governor menu
Apr 13 19:23:14.327481 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:23:14.327501 kernel: ASID allocator initialised with 65536 entries
Apr 13 19:23:14.327520 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:23:14.327540 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:23:14.327559 kernel: Modules: 17488 pages in range for non-PLT usage
Apr 13 19:23:14.327579 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:23:14.327598 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:23:14.327617 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:23:14.327643 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:23:14.327663 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:23:14.327682 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:23:14.327701 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:23:14.327721 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:23:14.327815 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:23:14.327844 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:23:14.327865 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:23:14.327885 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:23:14.327918 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:23:14.327938 kernel: ACPI: Interpreter enabled
Apr 13 19:23:14.327958 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:23:14.327977 kernel: ACPI: MCFG table detected, 1 entries
Apr 13 19:23:14.327997 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 13 19:23:14.328340 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 19:23:14.328688 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 13 19:23:14.329037 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 13 19:23:14.329291 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 13 19:23:14.329524 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 13 19:23:14.329552 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 13 19:23:14.329572 kernel: acpiphp: Slot [1] registered
Apr 13 19:23:14.329591 kernel: acpiphp: Slot [2] registered
Apr 13 19:23:14.329610 kernel: acpiphp: Slot [3] registered
Apr 13 19:23:14.329630 kernel: acpiphp: Slot [4] registered
Apr 13 19:23:14.329649 kernel: acpiphp: Slot [5] registered
Apr 13 19:23:14.329682 kernel: acpiphp: Slot [6] registered
Apr 13 19:23:14.329702 kernel: acpiphp: Slot [7] registered
Apr 13 19:23:14.329722 kernel: acpiphp: Slot [8] registered
Apr 13 19:23:14.331598 kernel: acpiphp: Slot [9] registered
Apr 13 19:23:14.331623 kernel: acpiphp: Slot [10] registered
Apr 13 19:23:14.331644 kernel: acpiphp: Slot [11] registered
Apr 13 19:23:14.331665 kernel: acpiphp: Slot [12] registered
Apr 13 19:23:14.331686 kernel: acpiphp: Slot [13] registered
Apr 13 19:23:14.331708 kernel: acpiphp: Slot [14] registered
Apr 13 19:23:14.331773 kernel: acpiphp: Slot [15] registered
Apr 13 19:23:14.331814 kernel: acpiphp: Slot [16] registered
Apr 13 19:23:14.331835 kernel: acpiphp: Slot [17] registered
Apr 13 19:23:14.331855 kernel: acpiphp: Slot [18] registered
Apr 13 19:23:14.331875 kernel: acpiphp: Slot [19] registered
Apr 13 19:23:14.331897 kernel: acpiphp: Slot [20] registered
Apr 13 19:23:14.331917 kernel: acpiphp: Slot [21] registered
Apr 13 19:23:14.331937 kernel: acpiphp: Slot [22] registered
Apr 13 19:23:14.331957 kernel: acpiphp: Slot [23] registered
Apr 13 19:23:14.331977 kernel: acpiphp: Slot [24] registered
Apr 13 19:23:14.332004 kernel: acpiphp: Slot [25] registered
Apr 13 19:23:14.332024 kernel: acpiphp: Slot [26] registered
Apr 13 19:23:14.332044 kernel: acpiphp: Slot [27] registered
Apr 13 19:23:14.332064 kernel: acpiphp: Slot [28] registered
Apr 13 19:23:14.332083 kernel: acpiphp: Slot [29] registered
Apr 13 19:23:14.332103 kernel: acpiphp: Slot [30] registered
Apr 13 19:23:14.332122 kernel: acpiphp: Slot [31] registered
Apr 13 19:23:14.332142 kernel: PCI host bridge to bus 0000:00
Apr 13 19:23:14.332473 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 13 19:23:14.332710 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 13 19:23:14.332989 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 13 19:23:14.333208 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 13 19:23:14.333503 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 13 19:23:14.333878 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 13 19:23:14.334176 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 13 19:23:14.334458 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 13 19:23:14.337252 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 13 19:23:14.337513 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 13 19:23:14.337827 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 13 19:23:14.338072 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 13 19:23:14.338321 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 13 19:23:14.338577 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 13 19:23:14.338973 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 13 19:23:14.339236 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 13 19:23:14.339464 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 13 19:23:14.339700 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 13 19:23:14.339785 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 13 19:23:14.339810 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 13 19:23:14.339831 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 13 19:23:14.339851 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 13 19:23:14.339882 kernel: iommu: Default domain type: Translated
Apr 13 19:23:14.339904 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:23:14.339923 kernel: efivars: Registered efivars operations
Apr 13 19:23:14.339942 kernel: vgaarb: loaded
Apr 13 19:23:14.339964 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:23:14.339984 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:23:14.340003 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:23:14.340024 kernel: pnp: PnP ACPI init
Apr 13 19:23:14.340305 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 13 19:23:14.340360 kernel: pnp: PnP ACPI: found 1 devices
Apr 13 19:23:14.340381 kernel: NET: Registered PF_INET protocol family
Apr 13 19:23:14.340402 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:23:14.340424 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:23:14.340444 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:23:14.340464 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:23:14.340484 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 19:23:14.340504 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 19:23:14.340533 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:23:14.340553 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:23:14.340573 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 19:23:14.340593 kernel: PCI: CLS 0 bytes, default 64
Apr 13 19:23:14.340612 kernel: kvm [1]: HYP mode not available
Apr 13 19:23:14.340633 kernel: Initialise system trusted keyrings
Apr 13 19:23:14.340652 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 19:23:14.340672 kernel: Key type asymmetric registered
Apr 13 19:23:14.340691 kernel: Asymmetric key parser 'x509' registered
Apr 13 19:23:14.340717 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 13 19:23:14.340811 kernel: io scheduler mq-deadline registered
Apr 13 19:23:14.340837 kernel: io scheduler kyber registered
Apr 13 19:23:14.340856 kernel: io scheduler bfq registered
Apr 13 19:23:14.341287 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 13 19:23:14.341345 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 13 19:23:14.341368 kernel: ACPI: button: Power Button [PWRB]
Apr 13 19:23:14.341389 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 13 19:23:14.341422 kernel: ACPI: button: Sleep Button [SLPB]
Apr 13 19:23:14.341443 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 19:23:14.341465 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 13 19:23:14.343261 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 13 19:23:14.343312 kernel: printk: console [ttyS0] disabled
Apr 13 19:23:14.343334 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 13 19:23:14.343354 kernel: printk: console [ttyS0] enabled
Apr 13 19:23:14.343374 kernel: printk: bootconsole [uart0] disabled
Apr 13 19:23:14.343393 kernel: thunder_xcv, ver 1.0
Apr 13 19:23:14.343412 kernel: thunder_bgx, ver 1.0
Apr 13 19:23:14.343441 kernel: nicpf, ver 1.0
Apr 13 19:23:14.343460 kernel: nicvf, ver 1.0
Apr 13 19:23:14.345880 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 13 19:23:14.346196 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:23:13 UTC (1776108193)
Apr 13 19:23:14.346235 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 13 19:23:14.346256 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 13 19:23:14.346276 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 13 19:23:14.346307 kernel: watchdog: Hard watchdog permanently disabled
Apr 13 19:23:14.346328 kernel: NET: Registered PF_INET6 protocol family
Apr 13 19:23:14.346348 kernel: Segment Routing with IPv6
Apr 13 19:23:14.346368 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 19:23:14.346388 kernel: NET: Registered PF_PACKET protocol family
Apr 13 19:23:14.346411 kernel: Key type dns_resolver registered
Apr 13 19:23:14.346433 kernel: registered taskstats version 1
Apr 13 19:23:14.346453 kernel: Loading compiled-in X.509 certificates
Apr 13 19:23:14.346473 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7'
Apr 13 19:23:14.346497 kernel: Key type .fscrypt registered
Apr 13 19:23:14.346526 kernel: Key type fscrypt-provisioning registered
Apr 13 19:23:14.346546 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 19:23:14.346567 kernel: ima: Allocated hash algorithm: sha1
Apr 13 19:23:14.346587 kernel: ima: No architecture policies found
Apr 13 19:23:14.346607 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 13 19:23:14.346627 kernel: clk: Disabling unused clocks
Apr 13 19:23:14.346647 kernel: Freeing unused kernel memory: 39424K
Apr 13 19:23:14.346667 kernel: Run /init as init process
Apr 13 19:23:14.346688 kernel: with arguments:
Apr 13 19:23:14.346714 kernel: /init
Apr 13 19:23:14.346807 kernel: with environment:
Apr 13 19:23:14.346833 kernel: HOME=/
Apr 13 19:23:14.346854 kernel: TERM=linux
Apr 13 19:23:14.346880 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:23:14.346905 systemd[1]: Detected virtualization amazon.
Apr 13 19:23:14.346929 systemd[1]: Detected architecture arm64.
Apr 13 19:23:14.346962 systemd[1]: Running in initrd.
Apr 13 19:23:14.346985 systemd[1]: No hostname configured, using default hostname.
Apr 13 19:23:14.347006 systemd[1]: Hostname set to .
Apr 13 19:23:14.347028 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:23:14.347049 systemd[1]: Queued start job for default target initrd.target.
Apr 13 19:23:14.347070 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:23:14.347093 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:23:14.347118 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 19:23:14.347145 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:23:14.347167 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 19:23:14.347189 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 19:23:14.347216 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 19:23:14.347239 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 19:23:14.347260 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:23:14.347282 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:23:14.347310 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:23:14.347333 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:23:14.347354 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:23:14.347375 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:23:14.347398 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:23:14.347420 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:23:14.347442 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:23:14.347463 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:23:14.347485 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:23:14.347511 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:23:14.347533 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:23:14.347554 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:23:14.347575 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 19:23:14.347597 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:23:14.347617 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 19:23:14.347638 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 19:23:14.347659 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:23:14.347685 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:23:14.347707 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:23:14.347784 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 19:23:14.347820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:23:14.347842 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 19:23:14.347865 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:23:14.347950 systemd-journald[252]: Collecting audit messages is disabled.
Apr 13 19:23:14.347997 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 19:23:14.348020 kernel: Bridge firewalling registered
Apr 13 19:23:14.348050 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:23:14.348072 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:23:14.348095 systemd-journald[252]: Journal started
Apr 13 19:23:14.348135 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2c1dbf012d6d4b183045b5b3011f17) is 8.0M, max 75.3M, 67.3M free.
Apr 13 19:23:14.283453 systemd-modules-load[253]: Inserted module 'overlay'
Apr 13 19:23:14.326806 systemd-modules-load[253]: Inserted module 'br_netfilter'
Apr 13 19:23:14.357810 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:23:14.362143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:14.379162 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:23:14.396530 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:23:14.399562 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:23:14.439241 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:23:14.452934 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:23:14.462934 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:23:14.469509 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:23:14.483164 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 19:23:14.504548 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:23:14.511814 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:23:14.544948 dracut-cmdline[286]: dracut-dracut-053
Apr 13 19:23:14.553522 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:23:14.596326 systemd-resolved[287]: Positive Trust Anchors:
Apr 13 19:23:14.599023 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:23:14.600468 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:23:14.755787 kernel: SCSI subsystem initialized
Apr 13 19:23:14.763777 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 19:23:14.777766 kernel: iscsi: registered transport (tcp)
Apr 13 19:23:14.802797 kernel: iscsi: registered transport (qla4xxx)
Apr 13 19:23:14.802879 kernel: QLogic iSCSI HBA Driver
Apr 13 19:23:14.864802 kernel: random: crng init done
Apr 13 19:23:14.865181 systemd-resolved[287]: Defaulting to hostname 'linux'.
Apr 13 19:23:14.869842 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:23:14.872934 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:23:14.908846 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:23:14.927147 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 19:23:14.961969 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 19:23:14.962052 kernel: device-mapper: uevent: version 1.0.3 Apr 13 19:23:14.962098 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 19:23:15.034816 kernel: raid6: neonx8 gen() 6530 MB/s Apr 13 19:23:15.052798 kernel: raid6: neonx4 gen() 6360 MB/s Apr 13 19:23:15.070811 kernel: raid6: neonx2 gen() 5321 MB/s Apr 13 19:23:15.088785 kernel: raid6: neonx1 gen() 3855 MB/s Apr 13 19:23:15.105811 kernel: raid6: int64x8 gen() 3730 MB/s Apr 13 19:23:15.122817 kernel: raid6: int64x4 gen() 3632 MB/s Apr 13 19:23:15.139804 kernel: raid6: int64x2 gen() 3473 MB/s Apr 13 19:23:15.157949 kernel: raid6: int64x1 gen() 2707 MB/s Apr 13 19:23:15.158042 kernel: raid6: using algorithm neonx8 gen() 6530 MB/s Apr 13 19:23:15.176907 kernel: raid6: .... xor() 4846 MB/s, rmw enabled Apr 13 19:23:15.176989 kernel: raid6: using neon recovery algorithm Apr 13 19:23:15.187292 kernel: xor: measuring software checksum speed Apr 13 19:23:15.187371 kernel: 8regs : 10979 MB/sec Apr 13 19:23:15.188793 kernel: 32regs : 11915 MB/sec Apr 13 19:23:15.190276 kernel: arm64_neon : 9337 MB/sec Apr 13 19:23:15.190341 kernel: xor: using function: 32regs (11915 MB/sec) Apr 13 19:23:15.282824 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 19:23:15.307327 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:23:15.322202 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:23:15.367929 systemd-udevd[470]: Using default interface naming scheme 'v255'. Apr 13 19:23:15.377056 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:23:15.396159 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 19:23:15.432459 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Apr 13 19:23:15.501085 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 13 19:23:15.513123 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:23:15.649968 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:23:15.666456 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 19:23:15.734846 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 19:23:15.738596 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:23:15.745414 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:23:15.748332 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:23:15.765138 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 19:23:15.817205 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 19:23:15.873239 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 13 19:23:15.873311 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Apr 13 19:23:15.883202 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 13 19:23:15.883605 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 13 19:23:15.898789 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:cb:bd:bf:cc:63 Apr 13 19:23:15.902412 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:23:15.902677 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:23:15.905390 (udev-worker)[530]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:23:15.935358 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:23:15.940951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 13 19:23:15.968226 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 13 19:23:15.941324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:23:15.947195 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:23:15.977969 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 13 19:23:15.980295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:23:15.995767 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 13 19:23:16.009089 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 19:23:16.009173 kernel: GPT:9289727 != 33554431 Apr 13 19:23:16.009201 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 19:23:16.011782 kernel: GPT:9289727 != 33554431 Apr 13 19:23:16.013005 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 19:23:16.016361 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 13 19:23:16.030869 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:23:16.043154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:23:16.096103 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:23:16.129365 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (525) Apr 13 19:23:16.147781 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (516) Apr 13 19:23:16.206099 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 13 19:23:16.264536 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 13 19:23:16.297633 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Apr 13 19:23:16.315770 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 13 19:23:16.322590 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 13 19:23:16.335088 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 19:23:16.362603 disk-uuid[659]: Primary Header is updated. Apr 13 19:23:16.362603 disk-uuid[659]: Secondary Entries is updated. Apr 13 19:23:16.362603 disk-uuid[659]: Secondary Header is updated. Apr 13 19:23:16.372777 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 13 19:23:17.393786 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 13 19:23:17.393896 disk-uuid[660]: The operation has completed successfully. Apr 13 19:23:17.581765 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 19:23:17.584359 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 19:23:17.647066 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 19:23:17.666596 sh[921]: Success Apr 13 19:23:17.691758 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 13 19:23:17.799050 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 19:23:17.814967 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 19:23:17.823904 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 13 19:23:17.856758 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd Apr 13 19:23:17.856823 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:23:17.856851 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 19:23:17.859144 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 19:23:17.859181 kernel: BTRFS info (device dm-0): using free space tree Apr 13 19:23:17.986781 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 19:23:17.988381 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 19:23:17.993045 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 19:23:18.011984 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 19:23:18.017398 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 19:23:18.053786 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:23:18.053859 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:23:18.055743 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 13 19:23:18.065826 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 13 19:23:18.086510 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 19:23:18.089498 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:23:18.099044 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 19:23:18.117134 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 13 19:23:18.211848 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 13 19:23:18.225085 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:23:18.296716 systemd-networkd[1113]: lo: Link UP Apr 13 19:23:18.298802 systemd-networkd[1113]: lo: Gained carrier Apr 13 19:23:18.303118 systemd-networkd[1113]: Enumeration completed Apr 13 19:23:18.303283 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:23:18.308010 systemd[1]: Reached target network.target - Network. Apr 13 19:23:18.314865 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:23:18.314884 systemd-networkd[1113]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:23:18.325306 systemd-networkd[1113]: eth0: Link UP Apr 13 19:23:18.325320 systemd-networkd[1113]: eth0: Gained carrier Apr 13 19:23:18.325338 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:23:18.349841 systemd-networkd[1113]: eth0: DHCPv4 address 172.31.19.12/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 13 19:23:18.659958 ignition[1035]: Ignition 2.19.0 Apr 13 19:23:18.659986 ignition[1035]: Stage: fetch-offline Apr 13 19:23:18.664414 ignition[1035]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:18.664460 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:18.667053 ignition[1035]: Ignition finished successfully Apr 13 19:23:18.673504 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:23:18.683037 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 13 19:23:18.714086 ignition[1122]: Ignition 2.19.0 Apr 13 19:23:18.715883 ignition[1122]: Stage: fetch Apr 13 19:23:18.716584 ignition[1122]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:18.716610 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:18.718457 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:18.730513 ignition[1122]: PUT result: OK Apr 13 19:23:18.734114 ignition[1122]: parsed url from cmdline: "" Apr 13 19:23:18.734133 ignition[1122]: no config URL provided Apr 13 19:23:18.734151 ignition[1122]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 19:23:18.734179 ignition[1122]: no config at "/usr/lib/ignition/user.ign" Apr 13 19:23:18.734218 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:18.741190 ignition[1122]: PUT result: OK Apr 13 19:23:18.741295 ignition[1122]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 13 19:23:18.748222 ignition[1122]: GET result: OK Apr 13 19:23:18.749101 ignition[1122]: parsing config with SHA512: 2276a316a3e26ed00eaa498e593e40f267537dc58cd997f60ce6d62e3e9edf550971ca1fd9df6bc42bfe088b921b300760d2e0235a04a7579aead835b752ea31 Apr 13 19:23:18.759111 unknown[1122]: fetched base config from "system" Apr 13 19:23:18.760932 unknown[1122]: fetched base config from "system" Apr 13 19:23:18.761018 unknown[1122]: fetched user config from "aws" Apr 13 19:23:18.763347 ignition[1122]: fetch: fetch complete Apr 13 19:23:18.763361 ignition[1122]: fetch: fetch passed Apr 13 19:23:18.763987 ignition[1122]: Ignition finished successfully Apr 13 19:23:18.773426 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 19:23:18.784022 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 13 19:23:18.815829 ignition[1129]: Ignition 2.19.0 Apr 13 19:23:18.817696 ignition[1129]: Stage: kargs Apr 13 19:23:18.819409 ignition[1129]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:18.821532 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:18.824235 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:18.827506 ignition[1129]: PUT result: OK Apr 13 19:23:18.832344 ignition[1129]: kargs: kargs passed Apr 13 19:23:18.832476 ignition[1129]: Ignition finished successfully Apr 13 19:23:18.839999 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 19:23:18.857072 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 19:23:18.882917 ignition[1135]: Ignition 2.19.0 Apr 13 19:23:18.882946 ignition[1135]: Stage: disks Apr 13 19:23:18.883606 ignition[1135]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:18.883634 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:18.883864 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:18.894465 ignition[1135]: PUT result: OK Apr 13 19:23:18.902322 ignition[1135]: disks: disks passed Apr 13 19:23:18.902629 ignition[1135]: Ignition finished successfully Apr 13 19:23:18.907247 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 19:23:18.917296 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 19:23:18.923104 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 19:23:18.931002 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:23:18.933282 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:23:18.935802 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:23:18.956010 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 13 19:23:18.995330 systemd-fsck[1144]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 13 19:23:19.000183 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 19:23:19.011085 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 19:23:19.096753 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none. Apr 13 19:23:19.097416 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 19:23:19.101724 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 19:23:19.112927 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:23:19.125117 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 19:23:19.150549 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1163) Apr 13 19:23:19.150607 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:23:19.150636 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:23:19.150663 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 13 19:23:19.129984 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 13 19:23:19.130076 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 19:23:19.130126 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:23:19.147162 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 19:23:19.168063 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 13 19:23:19.180791 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 13 19:23:19.190410 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 13 19:23:19.610240 initrd-setup-root[1187]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 19:23:19.620754 initrd-setup-root[1194]: cut: /sysroot/etc/group: No such file or directory Apr 13 19:23:19.629685 initrd-setup-root[1201]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 19:23:19.639043 initrd-setup-root[1208]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 19:23:19.809924 systemd-networkd[1113]: eth0: Gained IPv6LL Apr 13 19:23:20.040643 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 19:23:20.052936 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 19:23:20.063174 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 19:23:20.082786 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:23:20.082922 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 19:23:20.118955 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 19:23:20.134599 ignition[1276]: INFO : Ignition 2.19.0 Apr 13 19:23:20.136702 ignition[1276]: INFO : Stage: mount Apr 13 19:23:20.136702 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:20.136702 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:20.136702 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:20.150932 ignition[1276]: INFO : PUT result: OK Apr 13 19:23:20.155875 ignition[1276]: INFO : mount: mount passed Apr 13 19:23:20.157725 ignition[1276]: INFO : Ignition finished successfully Apr 13 19:23:20.162363 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 19:23:20.174107 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 19:23:20.192633 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 13 19:23:20.233767 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1288) Apr 13 19:23:20.238014 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:23:20.238055 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:23:20.239366 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 13 19:23:20.244771 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 13 19:23:20.248235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 19:23:20.284351 ignition[1305]: INFO : Ignition 2.19.0 Apr 13 19:23:20.286542 ignition[1305]: INFO : Stage: files Apr 13 19:23:20.286542 ignition[1305]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:20.286542 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:20.286542 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:20.297094 ignition[1305]: INFO : PUT result: OK Apr 13 19:23:20.302629 ignition[1305]: DEBUG : files: compiled without relabeling support, skipping Apr 13 19:23:20.306326 ignition[1305]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 19:23:20.306326 ignition[1305]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 19:23:20.335497 ignition[1305]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 19:23:20.339113 ignition[1305]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 19:23:20.343242 unknown[1305]: wrote ssh authorized keys file for user: core Apr 13 19:23:20.345923 ignition[1305]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 19:23:20.352235 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:23:20.352235 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 13 19:23:20.459076 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 19:23:20.617398 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:23:20.621849 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 19:23:20.625980 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 19:23:20.630149 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:23:20.634554 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:23:20.638591 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:23:20.642904 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:23:20.647082 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:23:20.651148 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:23:20.656880 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:23:20.656880 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Apr 13 19:23:20.656880 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 13 19:23:20.656880 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 13 19:23:20.656880 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 13 19:23:20.656880 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1 Apr 13 19:23:21.159481 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 19:23:21.556197 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 13 19:23:21.556197 ignition[1305]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 19:23:21.564498 ignition[1305]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:23:21.564498 ignition[1305]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:23:21.564498 ignition[1305]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 19:23:21.564498 ignition[1305]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 13 19:23:21.564498 ignition[1305]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 19:23:21.564498 ignition[1305]: INFO : 
files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:23:21.564498 ignition[1305]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:23:21.564498 ignition[1305]: INFO : files: files passed Apr 13 19:23:21.564498 ignition[1305]: INFO : Ignition finished successfully Apr 13 19:23:21.593907 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 19:23:21.606076 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 19:23:21.611414 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 19:23:21.632942 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 19:23:21.633201 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 19:23:21.649758 initrd-setup-root-after-ignition[1333]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:23:21.649758 initrd-setup-root-after-ignition[1333]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:23:21.656986 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:23:21.664787 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:23:21.667888 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 19:23:21.681024 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 19:23:21.735174 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 19:23:21.735361 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 19:23:21.739025 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Apr 13 19:23:21.748804 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 19:23:21.751440 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 19:23:21.763132 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 19:23:21.793788 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:23:21.807068 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 19:23:21.833909 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:23:21.837545 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:23:21.845169 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 19:23:21.847474 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 19:23:21.847789 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:23:21.857526 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 19:23:21.860239 systemd[1]: Stopped target basic.target - Basic System. Apr 13 19:23:21.866515 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 19:23:21.870358 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:23:21.873187 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 19:23:21.883371 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 19:23:21.885851 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:23:21.889575 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 19:23:21.899147 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 19:23:21.901649 systemd[1]: Stopped target swap.target - Swaps. 
Apr 13 19:23:21.904149 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 13 19:23:21.904408 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 19:23:21.915401 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:23:21.918110 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:23:21.920946 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 13 19:23:21.923172 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:23:21.926162 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 19:23:21.926415 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 19:23:21.941463 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 19:23:21.942613 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:23:21.950398 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 19:23:21.950864 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 19:23:21.970155 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 19:23:21.973105 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 19:23:21.973484 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:23:21.990281 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 19:23:21.998827 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 19:23:22.003028 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:23:22.008226 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 13 19:23:22.008453 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 13 19:23:22.019315 ignition[1357]: INFO : Ignition 2.19.0 Apr 13 19:23:22.019315 ignition[1357]: INFO : Stage: umount Apr 13 19:23:22.019315 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:22.019315 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:22.030091 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:22.034939 ignition[1357]: INFO : PUT result: OK Apr 13 19:23:22.041184 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 19:23:22.043697 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 19:23:22.054210 ignition[1357]: INFO : umount: umount passed Apr 13 19:23:22.056928 ignition[1357]: INFO : Ignition finished successfully Apr 13 19:23:22.059207 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 19:23:22.060071 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 19:23:22.070513 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 19:23:22.070686 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 19:23:22.073527 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 19:23:22.073645 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 19:23:22.077510 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 19:23:22.077624 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 19:23:22.080018 systemd[1]: Stopped target network.target - Network. Apr 13 19:23:22.082046 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 19:23:22.082147 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:23:22.084907 systemd[1]: Stopped target paths.target - Path Units. Apr 13 19:23:22.086897 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Apr 13 19:23:22.091143 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:23:22.095332 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 19:23:22.101074 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 19:23:22.103528 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 19:23:22.103613 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:23:22.108088 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 19:23:22.108177 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:23:22.111313 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 19:23:22.111492 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 19:23:22.114050 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 19:23:22.114155 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 19:23:22.120611 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 19:23:22.126417 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 19:23:22.132453 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 19:23:22.138864 systemd-networkd[1113]: eth0: DHCPv6 lease lost
Apr 13 19:23:22.149455 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 19:23:22.150423 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 19:23:22.160060 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 19:23:22.162426 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 19:23:22.181385 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 19:23:22.182330 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 19:23:22.189832 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 19:23:22.189944 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:23:22.200124 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 19:23:22.200237 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 19:23:22.219962 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 19:23:22.222034 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 19:23:22.222139 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:23:22.225006 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 19:23:22.225119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:23:22.227816 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 19:23:22.227904 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:23:22.230369 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 19:23:22.230451 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:23:22.234875 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:23:22.276898 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 19:23:22.279818 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:23:22.283051 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 19:23:22.283147 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:23:22.292776 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 19:23:22.292860 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:23:22.295868 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 19:23:22.295957 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:23:22.299231 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 19:23:22.299349 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:23:22.300345 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:23:22.300451 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:23:22.328015 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 19:23:22.330870 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 19:23:22.331000 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:23:22.341772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:23:22.341882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:22.345550 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 19:23:22.345911 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 19:23:22.379420 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 19:23:22.379985 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 19:23:22.389089 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 19:23:22.399121 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 19:23:22.429287 systemd[1]: Switching root.
Apr 13 19:23:22.470784 systemd-journald[252]: Journal stopped
Apr 13 19:23:24.930345 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Apr 13 19:23:24.930479 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 19:23:24.930525 kernel: SELinux: policy capability open_perms=1
Apr 13 19:23:24.930557 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 19:23:24.930589 kernel: SELinux: policy capability always_check_network=0
Apr 13 19:23:24.930621 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 19:23:24.930662 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 19:23:24.930708 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 19:23:24.930781 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 19:23:24.930817 kernel: audit: type=1403 audit(1776108202.865:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 19:23:24.930854 systemd[1]: Successfully loaded SELinux policy in 54.868ms.
Apr 13 19:23:24.930894 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.586ms.
Apr 13 19:23:24.930930 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:23:24.930964 systemd[1]: Detected virtualization amazon.
Apr 13 19:23:24.930996 systemd[1]: Detected architecture arm64.
Apr 13 19:23:24.931029 systemd[1]: Detected first boot.
Apr 13 19:23:24.931062 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:23:24.931099 zram_generator::config[1399]: No configuration found.
Apr 13 19:23:24.931145 systemd[1]: Populated /etc with preset unit settings.
Apr 13 19:23:24.931179 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 19:23:24.931212 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 19:23:24.931246 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 19:23:24.931279 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 19:23:24.931311 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 19:23:24.931344 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 19:23:24.931381 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 19:23:24.931415 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 19:23:24.931448 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 19:23:24.931480 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 19:23:24.931513 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 19:23:24.931546 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:23:24.931576 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:23:24.931609 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 19:23:24.931640 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 19:23:24.931677 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 19:23:24.931711 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:23:24.934800 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 19:23:24.934845 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:23:24.934879 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 19:23:24.934915 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 19:23:24.934948 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:23:24.934987 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 19:23:24.935019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:23:24.935051 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:23:24.935093 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:23:24.935125 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:23:24.935158 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 19:23:24.935192 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 19:23:24.935226 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:23:24.935257 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:23:24.935288 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:23:24.935325 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 19:23:24.935356 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 19:23:24.935388 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 19:23:24.935420 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 19:23:24.935450 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 19:23:24.935481 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 19:23:24.935513 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 19:23:24.935547 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 19:23:24.935582 systemd[1]: Reached target machines.target - Containers.
Apr 13 19:23:24.935616 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 19:23:24.935650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:23:24.935691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:23:24.935722 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 19:23:24.935854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:23:24.935889 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:23:24.935923 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:23:24.935956 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 19:23:24.935991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:23:24.936026 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 19:23:24.936058 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 19:23:24.936088 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 19:23:24.936119 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 19:23:24.936149 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 19:23:24.936178 kernel: fuse: init (API version 7.39)
Apr 13 19:23:24.936211 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:23:24.936241 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:23:24.936275 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 19:23:24.936307 kernel: loop: module loaded
Apr 13 19:23:24.936336 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 19:23:24.936367 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:23:24.936399 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 19:23:24.936429 systemd[1]: Stopped verity-setup.service.
Apr 13 19:23:24.936459 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 19:23:24.936488 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 19:23:24.936518 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 19:23:24.936552 kernel: ACPI: bus type drm_connector registered
Apr 13 19:23:24.936581 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 19:23:24.936611 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 19:23:24.936641 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 19:23:24.936675 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:23:24.936706 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 19:23:24.937833 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 19:23:24.937880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:23:24.937912 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:23:24.937943 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:23:24.938020 systemd-journald[1477]: Collecting audit messages is disabled.
Apr 13 19:23:24.940968 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:23:24.941012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:23:24.941046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:23:24.941077 systemd-journald[1477]: Journal started
Apr 13 19:23:24.941126 systemd-journald[1477]: Runtime Journal (/run/log/journal/ec2c1dbf012d6d4b183045b5b3011f17) is 8.0M, max 75.3M, 67.3M free.
Apr 13 19:23:24.304913 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 19:23:24.333215 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 13 19:23:24.334057 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 19:23:24.945827 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 19:23:24.945922 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 19:23:24.956823 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:23:24.956878 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:23:24.958873 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:23:24.964284 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:23:24.967409 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 19:23:24.970943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 19:23:24.985132 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 19:23:25.008682 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 19:23:25.019284 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 19:23:25.031927 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 19:23:25.034499 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 19:23:25.034557 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:23:25.041861 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 19:23:25.055996 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 19:23:25.063238 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 19:23:25.065999 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:23:25.079065 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 19:23:25.084866 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 19:23:25.087906 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:23:25.095053 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 19:23:25.098667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:23:25.102570 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:23:25.110312 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 19:23:25.117265 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 19:23:25.122642 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 19:23:25.128522 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 19:23:25.143064 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 19:23:25.201104 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 19:23:25.204477 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 19:23:25.216040 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 19:23:25.225399 systemd-journald[1477]: Time spent on flushing to /var/log/journal/ec2c1dbf012d6d4b183045b5b3011f17 is 123.358ms for 900 entries.
Apr 13 19:23:25.225399 systemd-journald[1477]: System Journal (/var/log/journal/ec2c1dbf012d6d4b183045b5b3011f17) is 8.0M, max 195.6M, 187.6M free.
Apr 13 19:23:25.368626 systemd-journald[1477]: Received client request to flush runtime journal.
Apr 13 19:23:25.368713 kernel: loop0: detected capacity change from 0 to 114328
Apr 13 19:23:25.368789 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 19:23:25.310993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:23:25.331097 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:23:25.343020 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 19:23:25.348366 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 19:23:25.351798 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 19:23:25.378422 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 19:23:25.393884 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 19:23:25.400164 kernel: loop1: detected capacity change from 0 to 200864
Apr 13 19:23:25.410110 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:23:25.436104 udevadm[1541]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 13 19:23:25.476820 systemd-tmpfiles[1548]: ACLs are not supported, ignoring.
Apr 13 19:23:25.476860 systemd-tmpfiles[1548]: ACLs are not supported, ignoring.
Apr 13 19:23:25.486608 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:23:25.735774 kernel: loop2: detected capacity change from 0 to 114432
Apr 13 19:23:25.861843 kernel: loop3: detected capacity change from 0 to 52536
Apr 13 19:23:25.908991 kernel: loop4: detected capacity change from 0 to 114328
Apr 13 19:23:25.933786 kernel: loop5: detected capacity change from 0 to 200864
Apr 13 19:23:25.973130 kernel: loop6: detected capacity change from 0 to 114432
Apr 13 19:23:25.983779 kernel: loop7: detected capacity change from 0 to 52536
Apr 13 19:23:25.999017 (sd-merge)[1555]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 13 19:23:26.000607 (sd-merge)[1555]: Merged extensions into '/usr'.
Apr 13 19:23:26.008142 systemd[1]: Reloading requested from client PID 1528 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 19:23:26.008168 systemd[1]: Reloading...
Apr 13 19:23:26.202794 zram_generator::config[1584]: No configuration found.
Apr 13 19:23:26.242152 ldconfig[1523]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 19:23:26.487443 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:23:26.598415 systemd[1]: Reloading finished in 589 ms.
Apr 13 19:23:26.640791 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 19:23:26.643967 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 19:23:26.647338 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 19:23:26.667074 systemd[1]: Starting ensure-sysext.service...
Apr 13 19:23:26.682427 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:23:26.689094 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:23:26.714228 systemd[1]: Reloading requested from client PID 1634 ('systemctl') (unit ensure-sysext.service)...
Apr 13 19:23:26.714389 systemd[1]: Reloading...
Apr 13 19:23:26.763431 systemd-udevd[1636]: Using default interface naming scheme 'v255'.
Apr 13 19:23:26.786982 systemd-tmpfiles[1635]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 19:23:26.788297 systemd-tmpfiles[1635]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 19:23:26.793420 systemd-tmpfiles[1635]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 19:23:26.794103 systemd-tmpfiles[1635]: ACLs are not supported, ignoring.
Apr 13 19:23:26.794266 systemd-tmpfiles[1635]: ACLs are not supported, ignoring.
Apr 13 19:23:26.806528 systemd-tmpfiles[1635]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:23:26.806562 systemd-tmpfiles[1635]: Skipping /boot
Apr 13 19:23:26.839139 systemd-tmpfiles[1635]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:23:26.839168 systemd-tmpfiles[1635]: Skipping /boot
Apr 13 19:23:26.939787 zram_generator::config[1671]: No configuration found.
Apr 13 19:23:27.080946 (udev-worker)[1662]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 19:23:27.284413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:23:27.417715 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 19:23:27.418902 systemd[1]: Reloading finished in 703 ms.
Apr 13 19:23:27.421772 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1688)
Apr 13 19:23:27.475659 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:23:27.501446 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:23:27.553852 systemd[1]: Finished ensure-sysext.service.
Apr 13 19:23:27.598337 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:23:27.612414 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 19:23:27.616786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:23:27.624270 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:23:27.637168 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:23:27.652051 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:23:27.660096 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:23:27.662752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:23:27.666660 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 19:23:27.679122 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:23:27.687621 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:23:27.690017 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 19:23:27.709747 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 19:23:27.717384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:23:27.721180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:23:27.722859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:23:27.731520 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:23:27.731853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:23:27.779042 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 19:23:27.782648 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 19:23:27.852968 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:23:27.853844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:23:27.859669 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:23:27.860910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:23:27.866462 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 19:23:27.878444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:23:27.878626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:23:27.892035 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 19:23:27.922813 augenrules[1864]: No rules
Apr 13 19:23:27.927362 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 19:23:27.930976 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 19:23:27.935872 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:23:27.955775 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 19:23:27.964223 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 19:23:27.966216 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 19:23:27.973284 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 19:23:27.978527 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 19:23:28.025778 lvm[1872]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:23:28.034198 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 19:23:28.051961 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 19:23:28.082450 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 19:23:28.091388 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:23:28.105221 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 19:23:28.136550 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:28.140888 lvm[1886]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:23:28.176857 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 19:23:28.213921 systemd-networkd[1842]: lo: Link UP
Apr 13 19:23:28.214439 systemd-networkd[1842]: lo: Gained carrier
Apr 13 19:23:28.217437 systemd-networkd[1842]: Enumeration completed
Apr 13 19:23:28.217812 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:23:28.218931 systemd-networkd[1842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:23:28.219066 systemd-networkd[1842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:23:28.224059 systemd-networkd[1842]: eth0: Link UP
Apr 13 19:23:28.226294 systemd-networkd[1842]: eth0: Gained carrier
Apr 13 19:23:28.226476 systemd-networkd[1842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:23:28.233721 systemd-resolved[1843]: Positive Trust Anchors:
Apr 13 19:23:28.233784 systemd-resolved[1843]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:23:28.233850 systemd-resolved[1843]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:23:28.236399 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 19:23:28.248886 systemd-networkd[1842]: eth0: DHCPv4 address 172.31.19.12/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 19:23:28.251288 systemd-resolved[1843]: Defaulting to hostname 'linux'.
Apr 13 19:23:28.254821 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:23:28.257643 systemd[1]: Reached target network.target - Network. Apr 13 19:23:28.259856 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:23:28.262542 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:23:28.265141 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 13 19:23:28.268086 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 19:23:28.271217 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 19:23:28.274692 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 19:23:28.277540 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 19:23:28.280502 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 19:23:28.280651 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:23:28.282827 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:23:28.286228 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 19:23:28.291975 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 19:23:28.304095 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 19:23:28.307701 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 19:23:28.310408 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:23:28.312849 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:23:28.315160 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
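dbus.socket, sshd.socket, and docker.socket above are socket units: systemd opens the listening sockets itself and hands them to the service on activation via the `LISTEN_FDS`/`LISTEN_PID` environment variables. A minimal sketch of the receiving side, modeled on `sd_listen_fds(3)` (the `listen_fds` helper and the fake environment are our own illustration):

```python
import os

SD_LISTEN_FDS_START = 3  # first inherited fd, per sd_listen_fds(3)

def listen_fds(env=None, pid=None):
    """Minimal sd_listen_fds(): count sockets systemd passed to this process."""
    env = os.environ if env is None else env
    pid = os.getpid() if pid is None else pid
    if env.get("LISTEN_PID") != str(pid):
        return 0  # the fds were meant for a different (or no) process
    return int(env.get("LISTEN_FDS", "0"))

# Simulated activation environment, as systemd would set it for e.g. sshd.socket.
fake_env = {"LISTEN_PID": "1234", "LISTEN_FDS": "2"}
print(listen_fds(fake_env, pid=1234))  # 2
print(listen_fds({}, pid=1234))        # 0: not socket-activated
```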
Apr 13 19:23:28.315226 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:23:28.323937 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 19:23:28.331087 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 19:23:28.337201 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 19:23:28.347245 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 19:23:28.369065 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 19:23:28.371816 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 19:23:28.376513 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 19:23:28.388052 systemd[1]: Started ntpd.service - Network Time Service. Apr 13 19:23:28.397935 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 19:23:28.408012 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 13 19:23:28.418481 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 19:23:28.436842 jq[1897]: false Apr 13 19:23:28.431100 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 19:23:28.442594 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 19:23:28.450125 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 19:23:28.451080 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 19:23:28.454084 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 13 19:23:28.483200 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 19:23:28.494041 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 19:23:28.495707 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 13 19:23:28.514370 dbus-daemon[1896]: [system] SELinux support is enabled Apr 13 19:23:28.516787 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 13 19:23:28.528951 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 19:23:28.529023 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 19:23:28.535564 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 19:23:28.549999 dbus-daemon[1896]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1842 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 13 19:23:28.535622 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 19:23:28.562108 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 13 19:23:28.604135 jq[1909]: true Apr 13 19:23:28.614944 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 19:23:28.615508 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 13 19:23:28.616181 (ntainerd)[1924]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 19:23:28.644760 extend-filesystems[1898]: Found loop4 Apr 13 19:23:28.644760 extend-filesystems[1898]: Found loop5 Apr 13 19:23:28.644760 extend-filesystems[1898]: Found loop6 Apr 13 19:23:28.644760 extend-filesystems[1898]: Found loop7 Apr 13 19:23:28.644760 extend-filesystems[1898]: Found nvme0n1 Apr 13 19:23:28.644760 extend-filesystems[1898]: Found nvme0n1p1 Apr 13 19:23:28.644760 extend-filesystems[1898]: Found nvme0n1p2 Apr 13 19:23:28.644760 extend-filesystems[1898]: Found nvme0n1p3 Apr 13 19:23:28.644760 extend-filesystems[1898]: Found usr Apr 13 19:23:28.644760 extend-filesystems[1898]: Found nvme0n1p4 Apr 13 19:23:28.644760 extend-filesystems[1898]: Found nvme0n1p6 Apr 13 19:23:28.644760 extend-filesystems[1898]: Found nvme0n1p7 Apr 13 19:23:28.644760 extend-filesystems[1898]: Found nvme0n1p9 Apr 13 19:23:28.644760 extend-filesystems[1898]: Checking size of /dev/nvme0n1p9 
Apr 13 19:23:28.691632 ntpd[1900]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 17:37:19 UTC 2026 (1): Starting Apr 13 19:23:28.691681 ntpd[1900]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 13 19:23:28.691701 ntpd[1900]: ---------------------------------------------------- Apr 13 19:23:28.691721 ntpd[1900]: ntp-4 is maintained by Network Time Foundation, Apr 13 19:23:28.691778 ntpd[1900]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 13 19:23:28.691799 ntpd[1900]: corporation. Support and training for ntp-4 are Apr 13 19:23:28.691817 ntpd[1900]: available at https://www.nwtime.org/support Apr 13 19:23:28.691838 ntpd[1900]: ---------------------------------------------------- Apr 13 19:23:28.715507 ntpd[1900]: proto: precision = 0.096 usec (-23) Apr 13 19:23:28.722273 tar[1919]: linux-arm64/LICENSE Apr 13 19:23:28.722273 tar[1919]: linux-arm64/helm Apr 13 19:23:28.730885 ntpd[1900]: basedate set to 2026-04-01 Apr 13 19:23:28.730923 ntpd[1900]: gps base set to 2026-04-05 (week 2413) 
Apr 13 19:23:28.747682 extend-filesystems[1898]: Resized partition /dev/nvme0n1p9 Apr 13 19:23:28.751410 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 19:23:28.752037 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 13 19:23:28.755633 ntpd[1900]: Listen and drop on 0 v6wildcard [::]:123 Apr 13 19:23:28.755720 ntpd[1900]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 13 19:23:28.757443 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 13 19:23:28.758026 extend-filesystems[1942]: resize2fs 1.47.1 (20-May-2024) Apr 13 19:23:28.765837 ntpd[1900]: Listen normally on 2 lo 127.0.0.1:123 Apr 13 19:23:28.773481 ntpd[1900]: Listen normally on 3 eth0 172.31.19.12:123 Apr 13 19:23:28.773590 ntpd[1900]: Listen normally on 4 lo [::1]:123 Apr 13 19:23:28.773684 ntpd[1900]: bind(21) AF_INET6 fe80::4cb:bdff:febf:cc63%2#123 flags 0x11 failed: Cannot assign requested address Apr 13 19:23:28.773724 ntpd[1900]: unable to create socket on eth0 (5) for fe80::4cb:bdff:febf:cc63%2#123 Apr 13 19:23:28.773774 ntpd[1900]: failed to init interface for address fe80::4cb:bdff:febf:cc63%2 Apr 13 19:23:28.773837 ntpd[1900]: Listening on routing socket on fd #21 for interface updates Apr 13 19:23:28.790915 jq[1930]: true Apr 13 19:23:28.810645 ntpd[1900]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:23:28.817871 ntpd[1900]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized 
Apr 13 19:23:28.820794 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 13 19:23:28.847761 update_engine[1908]: I20260413 19:23:28.844347 1908 main.cc:92] Flatcar Update Engine starting Apr 13 19:23:28.872104 systemd[1]: Started update-engine.service - Update Engine. Apr 13 19:23:28.886775 update_engine[1908]: I20260413 19:23:28.882530 1908 update_check_scheduler.cc:74] Next update check in 11m13s Apr 13 19:23:28.891236 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 13 19:23:28.949771 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 13 19:23:28.973830 extend-filesystems[1942]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 13 19:23:28.973830 extend-filesystems[1942]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 13 19:23:28.973830 extend-filesystems[1942]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 13 19:23:28.992506 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 13 19:23:28.993798 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 13 19:23:28.999823 extend-filesystems[1898]: Resized filesystem in /dev/nvme0n1p9 Apr 13 19:23:29.008871 bash[1966]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:23:29.011860 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 19:23:29.035123 systemd[1]: Starting sshkeys.service... 
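The EXT4-fs messages above report an online resize from 553472 to 3587067 blocks, and resize2fs notes the filesystem uses 4k blocks. The before/after sizes follow directly (a quick arithmetic sketch; only the block counts and block size come from the log):

```python
BLOCK = 4096  # resize2fs reports "(4k) blocks"

old_blocks, new_blocks = 553472, 3587067  # from the kernel resize messages
old_bytes = old_blocks * BLOCK
new_bytes = new_blocks * BLOCK

# Root partition grew from roughly 2.11 GiB to 13.68 GiB.
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
```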
Apr 13 19:23:29.092481 systemd-logind[1907]: Watching system buttons on /dev/input/event0 (Power Button) Apr 13 19:23:29.092535 systemd-logind[1907]: Watching system buttons on /dev/input/event1 (Sleep Button) Apr 13 19:23:29.095166 systemd-logind[1907]: New seat seat0. Apr 13 19:23:29.100812 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 19:23:29.105471 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 13 19:23:29.132934 coreos-metadata[1895]: Apr 13 19:23:29.132 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.139 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.142 INFO Fetch successful Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.142 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.142 INFO Fetch successful Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.143 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.148 INFO Fetch successful Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.148 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.148 INFO Fetch successful Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.148 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.154 INFO Fetch failed with 404: resource not found Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.154 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 13 
19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.158 INFO Fetch successful Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.158 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.158 INFO Fetch successful Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.158 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.161 INFO Fetch successful Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.162 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.165 INFO Fetch successful Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.165 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 13 19:23:29.179774 coreos-metadata[1895]: Apr 13 19:23:29.169 INFO Fetch successful Apr 13 19:23:29.171625 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 13 19:23:29.261239 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1660) Apr 13 19:23:29.283025 dbus-daemon[1896]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 13 19:23:29.288521 dbus-daemon[1896]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1920 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 13 19:23:29.288916 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 13 19:23:29.299764 systemd[1]: Starting polkit.service - Authorization Manager... Apr 13 19:23:29.313319 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
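The coreos-metadata fetches above walk a fixed list of instance-metadata paths and treat a 404 as "attribute not present" (the ipv6 lookup fails that way without aborting the run). A sketch of that pattern with the HTTP call injected so it runs offline; the path list mirrors the URLs logged above, while `collect` and `fake_fetch` are our own illustration, not the agent's actual code:

```python
BASE = "http://169.254.169.254/2021-01-03/meta-data/"

# Paths taken from the fetches logged above.
PATHS = ["instance-id", "instance-type", "local-ipv4", "public-ipv4",
         "ipv6", "placement/availability-zone", "hostname"]

def collect(fetch):
    """fetch(url) -> (status, body). A 404 means the attribute is absent."""
    found = {}
    for path in PATHS:
        status, body = fetch(BASE + path)
        if status == 200:
            found[path] = body
        elif status != 404:
            raise RuntimeError(f"{path}: unexpected status {status}")
    return found

# Offline stand-in mirroring the log: every fetch succeeds except ipv6.
def fake_fetch(url):
    if url.endswith("/ipv6"):
        return 404, "resource not found"
    return 200, "ok"

print(sorted(collect(fake_fetch)))
```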
Apr 13 19:23:29.318830 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 19:23:29.339663 polkitd[2004]: Started polkitd version 121 Apr 13 19:23:29.372323 polkitd[2004]: Loading rules from directory /etc/polkit-1/rules.d Apr 13 19:23:29.376160 systemd[1]: Started polkit.service - Authorization Manager. Apr 13 19:23:29.372451 polkitd[2004]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 13 19:23:29.374995 polkitd[2004]: Finished loading, compiling and executing 2 rules Apr 13 19:23:29.375869 dbus-daemon[1896]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 13 19:23:29.380889 polkitd[2004]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 13 19:23:29.451311 locksmithd[1957]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 19:23:29.465979 systemd-hostnamed[1920]: Hostname set to (transient) Apr 13 19:23:29.467364 systemd-resolved[1843]: System hostname changed to 'ip-172-31-19-12'. Apr 13 19:23:29.519266 coreos-metadata[1976]: Apr 13 19:23:29.517 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 13 19:23:29.525309 coreos-metadata[1976]: Apr 13 19:23:29.521 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 13 19:23:29.528097 coreos-metadata[1976]: Apr 13 19:23:29.528 INFO Fetch successful Apr 13 19:23:29.528097 coreos-metadata[1976]: Apr 13 19:23:29.528 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 13 19:23:29.530210 coreos-metadata[1976]: Apr 13 19:23:29.529 INFO Fetch successful Apr 13 19:23:29.533697 unknown[1976]: wrote ssh authorized keys file for user: core Apr 13 19:23:29.578666 update-ssh-keys[2044]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:23:29.584880 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Apr 13 19:23:29.594817 systemd[1]: Finished sshkeys.service. Apr 13 19:23:29.693075 ntpd[1900]: bind(24) AF_INET6 fe80::4cb:bdff:febf:cc63%2#123 flags 0x11 failed: Cannot assign requested address Apr 13 19:23:29.693165 ntpd[1900]: unable to create socket on eth0 (6) for fe80::4cb:bdff:febf:cc63%2#123 Apr 13 19:23:29.693196 ntpd[1900]: failed to init interface for address fe80::4cb:bdff:febf:cc63%2 Apr 13 19:23:29.768181 containerd[1924]: time="2026-04-13T19:23:29.767172528Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 19:23:29.924862 systemd-networkd[1842]: eth0: Gained IPv6LL Apr 13 19:23:29.939464 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 19:23:29.943620 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 19:23:29.969057 containerd[1924]: time="2026-04-13T19:23:29.968651425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:29.970335 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 13 19:23:29.979757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:29.989684 containerd[1924]: time="2026-04-13T19:23:29.989154217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:29.989684 containerd[1924]: time="2026-04-13T19:23:29.989228929Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 19:23:29.989684 containerd[1924]: time="2026-04-13T19:23:29.989265469Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 19:23:29.989684 containerd[1924]: time="2026-04-13T19:23:29.989578657Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 19:23:29.989684 containerd[1924]: time="2026-04-13T19:23:29.989616037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:29.991332 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 19:23:30.002984 containerd[1924]: time="2026-04-13T19:23:29.989813617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:30.002984 containerd[1924]: time="2026-04-13T19:23:30.001898805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:30.011755 containerd[1924]: time="2026-04-13T19:23:30.010881118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:30.011755 containerd[1924]: time="2026-04-13T19:23:30.010959898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 13 19:23:30.011755 containerd[1924]: time="2026-04-13T19:23:30.010997794Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:30.011755 containerd[1924]: time="2026-04-13T19:23:30.011048638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:30.012423 containerd[1924]: time="2026-04-13T19:23:30.012364762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:30.029476 containerd[1924]: time="2026-04-13T19:23:30.027364702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:30.029476 containerd[1924]: time="2026-04-13T19:23:30.027645478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:30.029476 containerd[1924]: time="2026-04-13T19:23:30.027678790Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 19:23:30.029476 containerd[1924]: time="2026-04-13T19:23:30.027956134Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 13 19:23:30.029476 containerd[1924]: time="2026-04-13T19:23:30.028084318Z" level=info msg="metadata content store policy set" policy=shared Apr 13 19:23:30.041785 containerd[1924]: time="2026-04-13T19:23:30.041588782Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 19:23:30.041785 containerd[1924]: time="2026-04-13T19:23:30.041702842Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Apr 13 19:23:30.042275 containerd[1924]: time="2026-04-13T19:23:30.042147706Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 19:23:30.042517 containerd[1924]: time="2026-04-13T19:23:30.042483838Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 19:23:30.046690 containerd[1924]: time="2026-04-13T19:23:30.043514002Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 19:23:30.046690 containerd[1924]: time="2026-04-13T19:23:30.043847578Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 13 19:23:30.046690 containerd[1924]: time="2026-04-13T19:23:30.044232694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 19:23:30.046690 containerd[1924]: time="2026-04-13T19:23:30.044445634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 19:23:30.046690 containerd[1924]: time="2026-04-13T19:23:30.044481370Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 19:23:30.046690 containerd[1924]: time="2026-04-13T19:23:30.044512258Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 19:23:30.046690 containerd[1924]: time="2026-04-13T19:23:30.044543770Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.046690 containerd[1924]: time="2026-04-13T19:23:30.044575966Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Apr 13 19:23:30.046690 containerd[1924]: time="2026-04-13T19:23:30.044605870Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.046690 containerd[1924]: time="2026-04-13T19:23:30.044640502Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.046690 containerd[1924]: time="2026-04-13T19:23:30.044679610Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.044713714Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051013006Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051057766Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051103738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051153406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051187150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051220102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051249826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051280642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051310510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051341122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051377878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051413662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.051945 containerd[1924]: time="2026-04-13T19:23:30.051444058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.052616 containerd[1924]: time="2026-04-13T19:23:30.051491698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.052616 containerd[1924]: time="2026-04-13T19:23:30.051522238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.052616 containerd[1924]: time="2026-04-13T19:23:30.051563134Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 19:23:30.052616 containerd[1924]: time="2026-04-13T19:23:30.051618430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 13 19:23:30.052616 containerd[1924]: time="2026-04-13T19:23:30.051649726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.052616 containerd[1924]: time="2026-04-13T19:23:30.051677026Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 19:23:30.060154 containerd[1924]: time="2026-04-13T19:23:30.058204450Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 19:23:30.060154 containerd[1924]: time="2026-04-13T19:23:30.058279822Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 19:23:30.060154 containerd[1924]: time="2026-04-13T19:23:30.058307038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 19:23:30.060154 containerd[1924]: time="2026-04-13T19:23:30.058340002Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 19:23:30.060154 containerd[1924]: time="2026-04-13T19:23:30.058366174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.060154 containerd[1924]: time="2026-04-13T19:23:30.058395814Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 19:23:30.060154 containerd[1924]: time="2026-04-13T19:23:30.058419514Z" level=info msg="NRI interface is disabled by configuration." Apr 13 19:23:30.060154 containerd[1924]: time="2026-04-13T19:23:30.058444906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 13 19:23:30.064130 containerd[1924]: time="2026-04-13T19:23:30.062167510Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 19:23:30.064130 containerd[1924]: time="2026-04-13T19:23:30.062303470Z" level=info msg="Connect containerd service" Apr 13 19:23:30.064130 containerd[1924]: time="2026-04-13T19:23:30.062373574Z" level=info msg="using legacy CRI server" Apr 13 19:23:30.064130 containerd[1924]: time="2026-04-13T19:23:30.062391982Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 19:23:30.074295 containerd[1924]: time="2026-04-13T19:23:30.071174458Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 19:23:30.087815 containerd[1924]: time="2026-04-13T19:23:30.086310862Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:23:30.090072 containerd[1924]: time="2026-04-13T19:23:30.089984482Z" level=info msg="Start subscribing containerd event" Apr 13 19:23:30.092913 containerd[1924]: time="2026-04-13T19:23:30.092860858Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 19:23:30.097006 containerd[1924]: time="2026-04-13T19:23:30.096946786Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 13 19:23:30.105751 containerd[1924]: time="2026-04-13T19:23:30.095960650Z" level=info msg="Start recovering state" Apr 13 19:23:30.105751 containerd[1924]: time="2026-04-13T19:23:30.102073894Z" level=info msg="Start event monitor" Apr 13 19:23:30.105751 containerd[1924]: time="2026-04-13T19:23:30.102105178Z" level=info msg="Start snapshots syncer" Apr 13 19:23:30.105751 containerd[1924]: time="2026-04-13T19:23:30.102128878Z" level=info msg="Start cni network conf syncer for default" Apr 13 19:23:30.105751 containerd[1924]: time="2026-04-13T19:23:30.102151690Z" level=info msg="Start streaming server" Apr 13 19:23:30.105751 containerd[1924]: time="2026-04-13T19:23:30.102553498Z" level=info msg="containerd successfully booted in 0.340971s" Apr 13 19:23:30.102704 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 19:23:30.156000 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 19:23:30.162830 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 19:23:30.179180 amazon-ssm-agent[2098]: Initializing new seelog logger Apr 13 19:23:30.179970 amazon-ssm-agent[2098]: New Seelog Logger Creation Complete Apr 13 19:23:30.180152 amazon-ssm-agent[2098]: 2026/04/13 19:23:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:30.180230 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:30.181246 amazon-ssm-agent[2098]: 2026/04/13 19:23:30 processing appconfig overrides Apr 13 19:23:30.181842 amazon-ssm-agent[2098]: 2026/04/13 19:23:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:30.181963 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 13 19:23:30.183110 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO Proxy environment variables: Apr 13 19:23:30.183543 amazon-ssm-agent[2098]: 2026/04/13 19:23:30 processing appconfig overrides Apr 13 19:23:30.184111 amazon-ssm-agent[2098]: 2026/04/13 19:23:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:30.184111 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:30.184560 amazon-ssm-agent[2098]: 2026/04/13 19:23:30 processing appconfig overrides Apr 13 19:23:30.193171 amazon-ssm-agent[2098]: 2026/04/13 19:23:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:30.193171 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:30.193171 amazon-ssm-agent[2098]: 2026/04/13 19:23:30 processing appconfig overrides Apr 13 19:23:30.286127 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO https_proxy: Apr 13 19:23:30.387816 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO http_proxy: Apr 13 19:23:30.486333 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO no_proxy: Apr 13 19:23:30.584864 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO Checking if agent identity type OnPrem can be assumed Apr 13 19:23:30.686751 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO Checking if agent identity type EC2 can be assumed Apr 13 19:23:30.784889 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO Agent will take identity from EC2 Apr 13 19:23:30.901447 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:23:30.901638 tar[1919]: linux-arm64/README.md Apr 13 19:23:30.945424 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 13 19:23:30.999264 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:23:31.098761 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:23:31.198074 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 13 19:23:31.267506 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 13 19:23:31.267506 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] Starting Core Agent Apr 13 19:23:31.267506 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 13 19:23:31.267506 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO [Registrar] Starting registrar module Apr 13 19:23:31.267506 amazon-ssm-agent[2098]: 2026-04-13 19:23:30 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 13 19:23:31.267506 amazon-ssm-agent[2098]: 2026-04-13 19:23:31 INFO [EC2Identity] EC2 registration was successful. Apr 13 19:23:31.267506 amazon-ssm-agent[2098]: 2026-04-13 19:23:31 INFO [CredentialRefresher] credentialRefresher has started Apr 13 19:23:31.267506 amazon-ssm-agent[2098]: 2026-04-13 19:23:31 INFO [CredentialRefresher] Starting credentials refresher loop Apr 13 19:23:31.267506 amazon-ssm-agent[2098]: 2026-04-13 19:23:31 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 13 19:23:31.299061 amazon-ssm-agent[2098]: 2026-04-13 19:23:31 INFO [CredentialRefresher] Next credential rotation will be in 32.20831392183333 minutes Apr 13 19:23:31.379061 sshd_keygen[1939]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 19:23:31.419907 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 19:23:31.432238 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Apr 13 19:23:31.440127 systemd[1]: Started sshd@0-172.31.19.12:22-4.175.71.9:38250.service - OpenSSH per-connection server daemon (4.175.71.9:38250). Apr 13 19:23:31.451503 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 19:23:31.452191 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 19:23:31.466064 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 19:23:31.491499 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 19:23:31.503521 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 19:23:31.510291 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 19:23:31.513192 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 19:23:32.296412 amazon-ssm-agent[2098]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 13 19:23:32.398115 amazon-ssm-agent[2098]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2141) started Apr 13 19:23:32.477869 sshd[2131]: Accepted publickey for core from 4.175.71.9 port 38250 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:32.480368 sshd[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:32.498403 amazon-ssm-agent[2098]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 13 19:23:32.512592 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 19:23:32.522274 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 19:23:32.538901 systemd-logind[1907]: New session 1 of user core. Apr 13 19:23:32.566491 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Apr 13 19:23:32.587639 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 19:23:32.603820 (systemd)[2153]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 19:23:32.693105 ntpd[1900]: Listen normally on 7 eth0 [fe80::4cb:bdff:febf:cc63%2]:123 Apr 13 19:23:32.696247 ntpd[1900]: 13 Apr 19:23:32 ntpd[1900]: Listen normally on 7 eth0 [fe80::4cb:bdff:febf:cc63%2]:123 Apr 13 19:23:32.839886 systemd[2153]: Queued start job for default target default.target. Apr 13 19:23:32.853723 systemd[2153]: Created slice app.slice - User Application Slice. Apr 13 19:23:32.853818 systemd[2153]: Reached target paths.target - Paths. Apr 13 19:23:32.853853 systemd[2153]: Reached target timers.target - Timers. Apr 13 19:23:32.856356 systemd[2153]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 19:23:32.879467 systemd[2153]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 19:23:32.879922 systemd[2153]: Reached target sockets.target - Sockets. Apr 13 19:23:32.879963 systemd[2153]: Reached target basic.target - Basic System. Apr 13 19:23:32.880050 systemd[2153]: Reached target default.target - Main User Target. Apr 13 19:23:32.880113 systemd[2153]: Startup finished in 263ms. Apr 13 19:23:32.880719 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 19:23:32.893045 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 19:23:33.600223 systemd[1]: Started sshd@1-172.31.19.12:22-4.175.71.9:38256.service - OpenSSH per-connection server daemon (4.175.71.9:38256). Apr 13 19:23:33.848025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:33.851801 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 19:23:33.857052 systemd[1]: Startup finished in 1.293s (kernel) + 9.034s (initrd) + 11.046s (userspace) = 21.374s. 
Apr 13 19:23:33.863427 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:23:34.563208 sshd[2164]: Accepted publickey for core from 4.175.71.9 port 38256 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:34.565954 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:34.575097 systemd-logind[1907]: New session 2 of user core. Apr 13 19:23:34.583033 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 19:23:35.229658 sshd[2164]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:35.236463 systemd[1]: sshd@1-172.31.19.12:22-4.175.71.9:38256.service: Deactivated successfully. Apr 13 19:23:35.240224 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 19:23:35.244324 systemd-logind[1907]: Session 2 logged out. Waiting for processes to exit. Apr 13 19:23:35.247039 systemd-logind[1907]: Removed session 2. Apr 13 19:23:35.418122 systemd[1]: Started sshd@2-172.31.19.12:22-4.175.71.9:33036.service - OpenSSH per-connection server daemon (4.175.71.9:33036). Apr 13 19:23:35.839033 kubelet[2171]: E0413 19:23:35.838975 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:23:35.844454 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:23:35.845426 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:23:35.847897 systemd[1]: kubelet.service: Consumed 1.300s CPU time. 
Apr 13 19:23:36.425476 sshd[2185]: Accepted publickey for core from 4.175.71.9 port 33036 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:36.427218 sshd[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:36.434862 systemd-logind[1907]: New session 3 of user core. Apr 13 19:23:36.447999 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 19:23:37.108681 sshd[2185]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:37.116301 systemd[1]: sshd@2-172.31.19.12:22-4.175.71.9:33036.service: Deactivated successfully. Apr 13 19:23:37.120272 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 19:23:37.121505 systemd-logind[1907]: Session 3 logged out. Waiting for processes to exit. Apr 13 19:23:37.123630 systemd-logind[1907]: Removed session 3. Apr 13 19:23:37.282204 systemd[1]: Started sshd@3-172.31.19.12:22-4.175.71.9:33040.service - OpenSSH per-connection server daemon (4.175.71.9:33040). Apr 13 19:23:38.248271 sshd[2194]: Accepted publickey for core from 4.175.71.9 port 33040 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:38.251351 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:38.258528 systemd-logind[1907]: New session 4 of user core. Apr 13 19:23:38.268018 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 19:23:38.921457 sshd[2194]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:38.926669 systemd[1]: sshd@3-172.31.19.12:22-4.175.71.9:33040.service: Deactivated successfully. Apr 13 19:23:38.929962 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 19:23:38.934059 systemd-logind[1907]: Session 4 logged out. Waiting for processes to exit. Apr 13 19:23:38.936058 systemd-logind[1907]: Removed session 4. 
Apr 13 19:23:39.116206 systemd[1]: Started sshd@4-172.31.19.12:22-4.175.71.9:33046.service - OpenSSH per-connection server daemon (4.175.71.9:33046). Apr 13 19:23:40.143698 sshd[2201]: Accepted publickey for core from 4.175.71.9 port 33046 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:40.146409 sshd[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:40.154402 systemd-logind[1907]: New session 5 of user core. Apr 13 19:23:40.164012 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 19:23:40.714127 sudo[2204]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 19:23:40.715468 sudo[2204]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:40.731840 sudo[2204]: pam_unix(sudo:session): session closed for user root Apr 13 19:23:40.899414 sshd[2201]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:40.904880 systemd[1]: sshd@4-172.31.19.12:22-4.175.71.9:33046.service: Deactivated successfully. Apr 13 19:23:40.907850 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 19:23:40.911460 systemd-logind[1907]: Session 5 logged out. Waiting for processes to exit. Apr 13 19:23:40.913571 systemd-logind[1907]: Removed session 5. Apr 13 19:23:41.072246 systemd[1]: Started sshd@5-172.31.19.12:22-4.175.71.9:33050.service - OpenSSH per-connection server daemon (4.175.71.9:33050). Apr 13 19:23:42.041142 sshd[2209]: Accepted publickey for core from 4.175.71.9 port 33050 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:42.042923 sshd[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:42.051837 systemd-logind[1907]: New session 6 of user core. Apr 13 19:23:42.062002 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 13 19:23:42.556113 sudo[2213]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 19:23:42.556802 sudo[2213]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:42.563396 sudo[2213]: pam_unix(sudo:session): session closed for user root Apr 13 19:23:42.573633 sudo[2212]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 19:23:42.574346 sudo[2212]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:42.597362 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 19:23:42.612396 auditctl[2216]: No rules Apr 13 19:23:42.613454 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 19:23:42.613816 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 19:23:42.625126 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:23:42.682926 augenrules[2235]: No rules Apr 13 19:23:42.685611 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 19:23:42.688064 sudo[2212]: pam_unix(sudo:session): session closed for user root Apr 13 19:23:42.844892 sshd[2209]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:42.851911 systemd[1]: sshd@5-172.31.19.12:22-4.175.71.9:33050.service: Deactivated successfully. Apr 13 19:23:42.852641 systemd-logind[1907]: Session 6 logged out. Waiting for processes to exit. Apr 13 19:23:42.856589 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 19:23:42.860614 systemd-logind[1907]: Removed session 6. Apr 13 19:23:43.014235 systemd[1]: Started sshd@6-172.31.19.12:22-4.175.71.9:33052.service - OpenSSH per-connection server daemon (4.175.71.9:33052). 
Apr 13 19:23:43.984639 sshd[2243]: Accepted publickey for core from 4.175.71.9 port 33052 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:43.987275 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:43.996756 systemd-logind[1907]: New session 7 of user core. Apr 13 19:23:44.003080 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 19:23:44.497520 sudo[2246]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 19:23:44.498824 sudo[2246]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:45.003260 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 13 19:23:45.012241 (dockerd)[2261]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 19:23:45.421059 dockerd[2261]: time="2026-04-13T19:23:45.420873988Z" level=info msg="Starting up" Apr 13 19:23:45.556018 dockerd[2261]: time="2026-04-13T19:23:45.555704863Z" level=info msg="Loading containers: start." Apr 13 19:23:45.706773 kernel: Initializing XFRM netlink socket Apr 13 19:23:45.741778 (udev-worker)[2283]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:23:45.825928 systemd-networkd[1842]: docker0: Link UP Apr 13 19:23:45.846137 dockerd[2261]: time="2026-04-13T19:23:45.846066217Z" level=info msg="Loading containers: done." Apr 13 19:23:45.871561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 13 19:23:45.876866 dockerd[2261]: time="2026-04-13T19:23:45.873925432Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 19:23:45.876866 dockerd[2261]: time="2026-04-13T19:23:45.874095557Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 19:23:45.876866 dockerd[2261]: time="2026-04-13T19:23:45.874298242Z" level=info msg="Daemon has completed initialization" Apr 13 19:23:45.880377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:45.943536 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 19:23:45.943897 dockerd[2261]: time="2026-04-13T19:23:45.943409401Z" level=info msg="API listen on /run/docker.sock" Apr 13 19:23:46.256246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:46.256265 (kubelet)[2404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:23:46.331161 kubelet[2404]: E0413 19:23:46.331074 2404 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:23:46.339903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:23:46.340426 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 19:23:47.600813 containerd[1924]: time="2026-04-13T19:23:47.600428200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\"" Apr 13 19:23:48.268038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1963227244.mount: Deactivated successfully. Apr 13 19:23:49.610476 containerd[1924]: time="2026-04-13T19:23:49.610414481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:49.613293 containerd[1924]: time="2026-04-13T19:23:49.612510260Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=24476890" Apr 13 19:23:49.613293 containerd[1924]: time="2026-04-13T19:23:49.613226670Z" level=info msg="ImageCreate event name:\"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:49.619185 containerd[1924]: time="2026-04-13T19:23:49.619134439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:49.621676 containerd[1924]: time="2026-04-13T19:23:49.621627388Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"24473489\" in 2.021124955s" Apr 13 19:23:49.621882 containerd[1924]: time="2026-04-13T19:23:49.621852057Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\"" Apr 13 19:23:49.623510 containerd[1924]: 
time="2026-04-13T19:23:49.623461485Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\"" Apr 13 19:23:50.941793 containerd[1924]: time="2026-04-13T19:23:50.941549809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:50.944939 containerd[1924]: time="2026-04-13T19:23:50.944878809Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=19139642" Apr 13 19:23:50.946059 containerd[1924]: time="2026-04-13T19:23:50.945992809Z" level=info msg="ImageCreate event name:\"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:50.951754 containerd[1924]: time="2026-04-13T19:23:50.951646879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:50.954154 containerd[1924]: time="2026-04-13T19:23:50.954090989Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"20617664\" in 1.33057017s" Apr 13 19:23:50.954576 containerd[1924]: time="2026-04-13T19:23:50.954293338Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\"" Apr 13 19:23:50.955513 containerd[1924]: time="2026-04-13T19:23:50.955232963Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\"" Apr 13 
19:23:52.079164 containerd[1924]: time="2026-04-13T19:23:52.079106203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:52.082093 containerd[1924]: time="2026-04-13T19:23:52.082040769Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=14195539" Apr 13 19:23:52.083923 containerd[1924]: time="2026-04-13T19:23:52.083851334Z" level=info msg="ImageCreate event name:\"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:52.091926 containerd[1924]: time="2026-04-13T19:23:52.091843548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:52.094588 containerd[1924]: time="2026-04-13T19:23:52.094125733Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"15673579\" in 1.138840867s" Apr 13 19:23:52.094588 containerd[1924]: time="2026-04-13T19:23:52.094186880Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\"" Apr 13 19:23:52.095328 containerd[1924]: time="2026-04-13T19:23:52.095261849Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\"" Apr 13 19:23:53.403834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount102116346.mount: Deactivated successfully. 
Apr 13 19:23:53.849889 containerd[1924]: time="2026-04-13T19:23:53.849686839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:53.851974 containerd[1924]: time="2026-04-13T19:23:53.849364118Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=22697099" Apr 13 19:23:53.852289 containerd[1924]: time="2026-04-13T19:23:53.852241104Z" level=info msg="ImageCreate event name:\"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:53.858505 containerd[1924]: time="2026-04-13T19:23:53.858451243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:53.859947 containerd[1924]: time="2026-04-13T19:23:53.859881446Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"22696118\" in 1.764554176s" Apr 13 19:23:53.860039 containerd[1924]: time="2026-04-13T19:23:53.859945774Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\"" Apr 13 19:23:53.861178 containerd[1924]: time="2026-04-13T19:23:53.861139086Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 13 19:23:54.490553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3133541619.mount: Deactivated successfully. 
Apr 13 19:23:55.799257 containerd[1924]: time="2026-04-13T19:23:55.799172736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:55.803681 containerd[1924]: time="2026-04-13T19:23:55.803613816Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Apr 13 19:23:55.807317 containerd[1924]: time="2026-04-13T19:23:55.807244226Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:55.817121 containerd[1924]: time="2026-04-13T19:23:55.817032694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:55.819435 containerd[1924]: time="2026-04-13T19:23:55.819385822Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.957995498s" Apr 13 19:23:55.820113 containerd[1924]: time="2026-04-13T19:23:55.819546270Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Apr 13 19:23:55.821031 containerd[1924]: time="2026-04-13T19:23:55.820977470Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 13 19:23:56.322602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631875284.mount: Deactivated successfully. 
Apr 13 19:23:56.338836 containerd[1924]: time="2026-04-13T19:23:56.337567907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:56.340466 containerd[1924]: time="2026-04-13T19:23:56.340412381Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Apr 13 19:23:56.342894 containerd[1924]: time="2026-04-13T19:23:56.342853093Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:56.349984 containerd[1924]: time="2026-04-13T19:23:56.349917603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:56.351860 containerd[1924]: time="2026-04-13T19:23:56.351811994Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 530.775059ms" Apr 13 19:23:56.352045 containerd[1924]: time="2026-04-13T19:23:56.352012242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Apr 13 19:23:56.352937 containerd[1924]: time="2026-04-13T19:23:56.352837955Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 13 19:23:56.590589 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 19:23:56.598129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 19:23:57.026224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:57.038229 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:23:57.039520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556874931.mount: Deactivated successfully. Apr 13 19:23:57.135428 kubelet[2557]: E0413 19:23:57.135375 2557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:23:57.141129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:23:57.141473 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:23:58.731881 containerd[1924]: time="2026-04-13T19:23:58.731796208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:58.779178 containerd[1924]: time="2026-04-13T19:23:58.778680611Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21139072" Apr 13 19:23:58.823844 containerd[1924]: time="2026-04-13T19:23:58.823777224Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:58.866850 containerd[1924]: time="2026-04-13T19:23:58.866793402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:58.869885 containerd[1924]: time="2026-04-13T19:23:58.869493695Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 2.516314206s" Apr 13 19:23:58.869885 containerd[1924]: time="2026-04-13T19:23:58.869548743Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Apr 13 19:23:59.503101 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 13 19:24:07.327410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 19:24:07.339163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:07.867234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:07.880774 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:24:07.969357 kubelet[2655]: E0413 19:24:07.969229 2655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:24:07.974752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:24:07.975296 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:24:08.260622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:08.274550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:08.333144 systemd[1]: Reloading requested from client PID 2669 ('systemctl') (unit session-7.scope)...
Apr 13 19:24:08.333179 systemd[1]: Reloading... Apr 13 19:24:08.576899 zram_generator::config[2709]: No configuration found. Apr 13 19:24:08.860614 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:24:09.041571 systemd[1]: Reloading finished in 707 ms. Apr 13 19:24:09.136981 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 19:24:09.137191 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 19:24:09.137674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:09.143607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:09.505071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:09.516383 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:24:09.604215 kubelet[2772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:24:09.604215 kubelet[2772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 19:24:09.605507 kubelet[2772]: I0413 19:24:09.605393 2772 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:24:12.157950 kubelet[2772]: I0413 19:24:12.157717 2772 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 19:24:12.157950 kubelet[2772]: I0413 19:24:12.157816 2772 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:24:12.157950 kubelet[2772]: I0413 19:24:12.157874 2772 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 19:24:12.157950 kubelet[2772]: I0413 19:24:12.157891 2772 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 19:24:12.158998 kubelet[2772]: I0413 19:24:12.158339 2772 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:24:12.177073 kubelet[2772]: E0413 19:24:12.176817 2772 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.19.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:24:12.180344 kubelet[2772]: I0413 19:24:12.180257 2772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:24:12.187697 kubelet[2772]: E0413 19:24:12.186786 2772 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:24:12.187697 kubelet[2772]: I0413 19:24:12.187007 2772 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Apr 13 19:24:12.193370 kubelet[2772]: I0413 19:24:12.193324 2772 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 13 19:24:12.194258 kubelet[2772]: I0413 19:24:12.194201 2772 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:24:12.194761 kubelet[2772]: I0413 19:24:12.194440 2772 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-12","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:24:12.195179 kubelet[2772]: I0413 19:24:12.195139 2772 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 19:24:12.195325 kubelet[2772]: I0413 19:24:12.195305 2772 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 19:24:12.196068 kubelet[2772]: I0413 19:24:12.195598 2772 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 19:24:12.198010 kubelet[2772]: I0413 19:24:12.197582 2772 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:12.200171 kubelet[2772]: I0413 19:24:12.200126 2772 kubelet.go:475] "Attempting to sync node with API server" Apr 13 19:24:12.200422 kubelet[2772]: I0413 19:24:12.200394 2772 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:24:12.200600 kubelet[2772]: I0413 19:24:12.200577 2772 kubelet.go:387] "Adding apiserver pod source" Apr 13 19:24:12.200796 kubelet[2772]: I0413 19:24:12.200768 2772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:24:12.204121 kubelet[2772]: E0413 19:24:12.203334 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.19.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-12&limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:24:12.204121 kubelet[2772]: E0413 19:24:12.203631 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:24:12.204641 kubelet[2772]: I0413 19:24:12.204596 2772 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:24:12.205952 kubelet[2772]: I0413 19:24:12.205907 2772 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:24:12.206784 kubelet[2772]: I0413 19:24:12.206151 2772 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 19:24:12.206784 kubelet[2772]: W0413 19:24:12.206224 2772 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 13 19:24:12.214334 kubelet[2772]: I0413 19:24:12.212847 2772 server.go:1262] "Started kubelet" Apr 13 19:24:12.215081 kubelet[2772]: I0413 19:24:12.215024 2772 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:24:12.217161 kubelet[2772]: I0413 19:24:12.217119 2772 server.go:310] "Adding debug handlers to kubelet server" Apr 13 19:24:12.220785 kubelet[2772]: I0413 19:24:12.218949 2772 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:24:12.220785 kubelet[2772]: I0413 19:24:12.219083 2772 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 19:24:12.220785 kubelet[2772]: I0413 19:24:12.219626 2772 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:24:12.222874 kubelet[2772]: E0413 19:24:12.219953 2772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.12:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-12.18a6010b8fc89a25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-12,UID:ip-172-31-19-12,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-12,},FirstTimestamp:2026-04-13 19:24:12.212795941 +0000 UTC m=+2.689592160,LastTimestamp:2026-04-13 19:24:12.212795941 +0000 UTC m=+2.689592160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-12,}" Apr 13 19:24:12.230253 kubelet[2772]: E0413 19:24:12.230193 2772 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:24:12.233483 kubelet[2772]: I0413 19:24:12.232297 2772 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:24:12.233483 kubelet[2772]: I0413 19:24:12.231932 2772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:24:12.238378 kubelet[2772]: I0413 19:24:12.238312 2772 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 19:24:12.238595 kubelet[2772]: I0413 19:24:12.238554 2772 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 19:24:12.238677 kubelet[2772]: I0413 19:24:12.238657 2772 reconciler.go:29] "Reconciler: start to sync state" Apr 13 19:24:12.239852 kubelet[2772]: E0413 19:24:12.239563 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.19.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:24:12.244337 kubelet[2772]: E0413 19:24:12.242927 2772 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not found" Apr 13 19:24:12.244337 kubelet[2772]: E0413 19:24:12.243137 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": dial tcp 172.31.19.12:6443: connect: connection refused" interval="200ms" Apr 13 19:24:12.244337 kubelet[2772]: I0413 19:24:12.243365 2772 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:24:12.244337 kubelet[2772]: I0413 19:24:12.243396 2772 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:24:12.244337 kubelet[2772]: I0413 19:24:12.243562 2772 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:24:12.275953 kubelet[2772]: I0413 19:24:12.275883 2772 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 13 19:24:12.285890 kubelet[2772]: I0413 19:24:12.285814 2772 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:24:12.285890 kubelet[2772]: I0413 19:24:12.285860 2772 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:24:12.285890 kubelet[2772]: I0413 19:24:12.285898 2772 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:12.290294 kubelet[2772]: I0413 19:24:12.290237 2772 policy_none.go:49] "None policy: Start" Apr 13 19:24:12.290456 kubelet[2772]: I0413 19:24:12.290310 2772 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 19:24:12.290456 kubelet[2772]: I0413 19:24:12.290341 2772 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 19:24:12.295041 kubelet[2772]: I0413 19:24:12.294992 2772 policy_none.go:47] "Start" Apr 13 19:24:12.306963 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 19:24:12.331367 kubelet[2772]: I0413 19:24:12.328356 2772 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 19:24:12.331367 kubelet[2772]: I0413 19:24:12.328412 2772 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 19:24:12.331367 kubelet[2772]: I0413 19:24:12.328454 2772 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 19:24:12.331367 kubelet[2772]: E0413 19:24:12.328535 2772 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:24:12.330647 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 13 19:24:12.333940 kubelet[2772]: E0413 19:24:12.333852 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:24:12.343115 kubelet[2772]: E0413 19:24:12.343055 2772 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not found" Apr 13 19:24:12.344129 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 19:24:12.358712 kubelet[2772]: E0413 19:24:12.358621 2772 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:24:12.361718 kubelet[2772]: I0413 19:24:12.360553 2772 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:24:12.361718 kubelet[2772]: I0413 19:24:12.360596 2772 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:24:12.361718 kubelet[2772]: I0413 19:24:12.361127 2772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:24:12.365956 kubelet[2772]: E0413 19:24:12.365891 2772 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:24:12.366125 kubelet[2772]: E0413 19:24:12.365980 2772 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-12\" not found" Apr 13 19:24:12.444903 kubelet[2772]: I0413 19:24:12.441998 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5bdd6043a5e9b27e782f079f3d7051a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"c5bdd6043a5e9b27e782f079f3d7051a\") " pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:12.444903 kubelet[2772]: I0413 19:24:12.442085 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fc285bbd6314585ecdf0ddab3965583-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"0fc285bbd6314585ecdf0ddab3965583\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:12.444903 kubelet[2772]: I0413 19:24:12.442163 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0fc285bbd6314585ecdf0ddab3965583-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"0fc285bbd6314585ecdf0ddab3965583\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:12.444903 kubelet[2772]: I0413 19:24:12.442207 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fc285bbd6314585ecdf0ddab3965583-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"0fc285bbd6314585ecdf0ddab3965583\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:12.444903 kubelet[2772]: I0413 19:24:12.442300 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fc285bbd6314585ecdf0ddab3965583-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"0fc285bbd6314585ecdf0ddab3965583\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:12.446131 kubelet[2772]: I0413 19:24:12.444980 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fc285bbd6314585ecdf0ddab3965583-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"0fc285bbd6314585ecdf0ddab3965583\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:12.446131 kubelet[2772]: I0413 19:24:12.445064 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5bdd6043a5e9b27e782f079f3d7051a-ca-certs\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"c5bdd6043a5e9b27e782f079f3d7051a\") " pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:12.446131 kubelet[2772]: I0413 19:24:12.445128 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5bdd6043a5e9b27e782f079f3d7051a-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"c5bdd6043a5e9b27e782f079f3d7051a\") " pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:12.446131 kubelet[2772]: E0413 19:24:12.446024 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": dial tcp 172.31.19.12:6443: connect: connection refused" interval="400ms" Apr 13 19:24:12.458246 systemd[1]: Created slice kubepods-burstable-podc5bdd6043a5e9b27e782f079f3d7051a.slice - libcontainer container kubepods-burstable-podc5bdd6043a5e9b27e782f079f3d7051a.slice. Apr 13 19:24:12.465114 kubelet[2772]: I0413 19:24:12.464815 2772 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Apr 13 19:24:12.466337 kubelet[2772]: E0413 19:24:12.465565 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.12:6443/api/v1/nodes\": dial tcp 172.31.19.12:6443: connect: connection refused" node="ip-172-31-19-12" Apr 13 19:24:12.477684 kubelet[2772]: E0413 19:24:12.477609 2772 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Apr 13 19:24:12.484773 systemd[1]: Created slice kubepods-burstable-podccb70822fd7bd771664548fdb26c93e2.slice - libcontainer container kubepods-burstable-podccb70822fd7bd771664548fdb26c93e2.slice. Apr 13 19:24:12.491471 kubelet[2772]: E0413 19:24:12.491201 2772 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Apr 13 19:24:12.496467 systemd[1]: Created slice kubepods-burstable-pod0fc285bbd6314585ecdf0ddab3965583.slice - libcontainer container kubepods-burstable-pod0fc285bbd6314585ecdf0ddab3965583.slice.
Apr 13 19:24:12.501442 kubelet[2772]: E0413 19:24:12.501377 2772 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Apr 13 19:24:12.545962 kubelet[2772]: I0413 19:24:12.545779 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccb70822fd7bd771664548fdb26c93e2-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-12\" (UID: \"ccb70822fd7bd771664548fdb26c93e2\") " pod="kube-system/kube-scheduler-ip-172-31-19-12" Apr 13 19:24:12.669152 kubelet[2772]: I0413 19:24:12.669071 2772 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Apr 13 19:24:12.669690 kubelet[2772]: E0413 19:24:12.669587 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.12:6443/api/v1/nodes\": dial tcp 172.31.19.12:6443: connect: connection refused" node="ip-172-31-19-12" Apr 13 19:24:12.782538 containerd[1924]: time="2026-04-13T19:24:12.782339142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-12,Uid:c5bdd6043a5e9b27e782f079f3d7051a,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:12.796558 containerd[1924]: time="2026-04-13T19:24:12.796142308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-12,Uid:ccb70822fd7bd771664548fdb26c93e2,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:12.805376 containerd[1924]: time="2026-04-13T19:24:12.805307149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-12,Uid:0fc285bbd6314585ecdf0ddab3965583,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:12.847817 kubelet[2772]: E0413 19:24:12.847605 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": dial tcp 172.31.19.12:6443: connect: connection refused" interval="800ms" Apr 13 19:24:12.944280 kubelet[2772]: E0413 19:24:12.944027 2772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.12:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-12.18a6010b8fc89a25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-12,UID:ip-172-31-19-12,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-12,},FirstTimestamp:2026-04-13 19:24:12.212795941 +0000 UTC m=+2.689592160,LastTimestamp:2026-04-13 19:24:12.212795941 +0000 UTC m=+2.689592160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-12,}" Apr 13 19:24:13.073661 kubelet[2772]: I0413 19:24:13.073243 2772 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Apr 13 19:24:13.074887 kubelet[2772]: E0413 19:24:13.074790 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.12:6443/api/v1/nodes\": dial tcp 172.31.19.12:6443: connect: connection refused" node="ip-172-31-19-12" Apr 13 19:24:13.198268 kubelet[2772]: E0413 19:24:13.198206 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.19.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-12&limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:24:13.226890 kubelet[2772]: E0413 19:24:13.226276 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:24:13.381537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272835002.mount: Deactivated successfully. Apr 13 19:24:13.390817 containerd[1924]: time="2026-04-13T19:24:13.390572879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:13.392833 containerd[1924]: time="2026-04-13T19:24:13.392696260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:13.395790 containerd[1924]: time="2026-04-13T19:24:13.395363225Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:24:13.396180 containerd[1924]: time="2026-04-13T19:24:13.396090321Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:13.398061 containerd[1924]: time="2026-04-13T19:24:13.397830242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:24:13.398061 containerd[1924]: time="2026-04-13T19:24:13.397911259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 13 19:24:13.403560 kubelet[2772]: E0413 19:24:13.403242 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:24:13.403699 containerd[1924]: time="2026-04-13T19:24:13.403458583Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:13.408650 containerd[1924]: time="2026-04-13T19:24:13.408555562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.112062ms" Apr 13 19:24:13.411770 containerd[1924]: time="2026-04-13T19:24:13.410956318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:13.416929 containerd[1924]: time="2026-04-13T19:24:13.416859152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 634.375242ms" Apr 13 19:24:13.418817 containerd[1924]: time="2026-04-13T19:24:13.418718353Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 622.455517ms" Apr 13 19:24:13.621286 containerd[1924]: time="2026-04-13T19:24:13.619808325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:13.621286 containerd[1924]: time="2026-04-13T19:24:13.620621384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:13.621286 containerd[1924]: time="2026-04-13T19:24:13.620688161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:13.621286 containerd[1924]: time="2026-04-13T19:24:13.620967289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:13.627297 containerd[1924]: time="2026-04-13T19:24:13.626292094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:13.627814 containerd[1924]: time="2026-04-13T19:24:13.627212882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:13.627814 containerd[1924]: time="2026-04-13T19:24:13.627312243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:13.629550 containerd[1924]: time="2026-04-13T19:24:13.627901066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:13.634472 containerd[1924]: time="2026-04-13T19:24:13.634156108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:13.634472 containerd[1924]: time="2026-04-13T19:24:13.634292004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:13.636249 containerd[1924]: time="2026-04-13T19:24:13.636126460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:13.639124 containerd[1924]: time="2026-04-13T19:24:13.638915983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:13.649333 kubelet[2772]: E0413 19:24:13.649271 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": dial tcp 172.31.19.12:6443: connect: connection refused" interval="1.6s" Apr 13 19:24:13.674085 systemd[1]: Started cri-containerd-8c835a3561ff312c0dca3398dfbc8418592b43f7a92c695f67edde2dabfc0191.scope - libcontainer container 8c835a3561ff312c0dca3398dfbc8418592b43f7a92c695f67edde2dabfc0191. Apr 13 19:24:13.706316 systemd[1]: Started cri-containerd-bdfff695e5108e379b064ff67c13e1d2663625f096e38925dd301aa35d0e69d5.scope - libcontainer container bdfff695e5108e379b064ff67c13e1d2663625f096e38925dd301aa35d0e69d5. Apr 13 19:24:13.738889 systemd[1]: Started cri-containerd-325d0a46a99311223f8be63b465c8d48d42ada3a97d62b693898cd34498b047f.scope - libcontainer container 325d0a46a99311223f8be63b465c8d48d42ada3a97d62b693898cd34498b047f. 
Apr 13 19:24:13.776831 kubelet[2772]: E0413 19:24:13.776298 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.19.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:24:13.842983 containerd[1924]: time="2026-04-13T19:24:13.842418692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-12,Uid:c5bdd6043a5e9b27e782f079f3d7051a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c835a3561ff312c0dca3398dfbc8418592b43f7a92c695f67edde2dabfc0191\"" Apr 13 19:24:13.863342 containerd[1924]: time="2026-04-13T19:24:13.862669237Z" level=info msg="CreateContainer within sandbox \"8c835a3561ff312c0dca3398dfbc8418592b43f7a92c695f67edde2dabfc0191\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 19:24:13.867533 containerd[1924]: time="2026-04-13T19:24:13.867203327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-12,Uid:0fc285bbd6314585ecdf0ddab3965583,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdfff695e5108e379b064ff67c13e1d2663625f096e38925dd301aa35d0e69d5\"" Apr 13 19:24:13.881170 containerd[1924]: time="2026-04-13T19:24:13.878703394Z" level=info msg="CreateContainer within sandbox \"bdfff695e5108e379b064ff67c13e1d2663625f096e38925dd301aa35d0e69d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:24:13.883828 containerd[1924]: time="2026-04-13T19:24:13.883333340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-12,Uid:ccb70822fd7bd771664548fdb26c93e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"325d0a46a99311223f8be63b465c8d48d42ada3a97d62b693898cd34498b047f\"" Apr 13 19:24:13.885249 kubelet[2772]: I0413 19:24:13.884920 2772 
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Apr 13 19:24:13.887223 kubelet[2772]: E0413 19:24:13.886969 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.12:6443/api/v1/nodes\": dial tcp 172.31.19.12:6443: connect: connection refused" node="ip-172-31-19-12" Apr 13 19:24:13.897560 containerd[1924]: time="2026-04-13T19:24:13.897443391Z" level=info msg="CreateContainer within sandbox \"325d0a46a99311223f8be63b465c8d48d42ada3a97d62b693898cd34498b047f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:24:13.905376 containerd[1924]: time="2026-04-13T19:24:13.905222931Z" level=info msg="CreateContainer within sandbox \"8c835a3561ff312c0dca3398dfbc8418592b43f7a92c695f67edde2dabfc0191\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f59720c32c15f0101ba4e47aa95b37974bcc06accf6f89b7d8657b0e81670d1f\"" Apr 13 19:24:13.907860 containerd[1924]: time="2026-04-13T19:24:13.906545644Z" level=info msg="StartContainer for \"f59720c32c15f0101ba4e47aa95b37974bcc06accf6f89b7d8657b0e81670d1f\"" Apr 13 19:24:13.908920 containerd[1924]: time="2026-04-13T19:24:13.908839319Z" level=info msg="CreateContainer within sandbox \"bdfff695e5108e379b064ff67c13e1d2663625f096e38925dd301aa35d0e69d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"882489afc8001cc5ec22a7aeb6d20bbe1834d16b0f09f87d53d7ce46037e27ef\"" Apr 13 19:24:13.909827 containerd[1924]: time="2026-04-13T19:24:13.909722984Z" level=info msg="StartContainer for \"882489afc8001cc5ec22a7aeb6d20bbe1834d16b0f09f87d53d7ce46037e27ef\"" Apr 13 19:24:13.929289 containerd[1924]: time="2026-04-13T19:24:13.929074543Z" level=info msg="CreateContainer within sandbox \"325d0a46a99311223f8be63b465c8d48d42ada3a97d62b693898cd34498b047f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"25173980d6c125282e6920d2e5d7993b0d8ace7d3eb5ff580370a369c48260cc\"" Apr 13 19:24:13.931772 containerd[1924]: time="2026-04-13T19:24:13.930919565Z" level=info msg="StartContainer for \"25173980d6c125282e6920d2e5d7993b0d8ace7d3eb5ff580370a369c48260cc\"" Apr 13 19:24:13.986673 systemd[1]: Started cri-containerd-f59720c32c15f0101ba4e47aa95b37974bcc06accf6f89b7d8657b0e81670d1f.scope - libcontainer container f59720c32c15f0101ba4e47aa95b37974bcc06accf6f89b7d8657b0e81670d1f. Apr 13 19:24:14.025064 systemd[1]: Started cri-containerd-25173980d6c125282e6920d2e5d7993b0d8ace7d3eb5ff580370a369c48260cc.scope - libcontainer container 25173980d6c125282e6920d2e5d7993b0d8ace7d3eb5ff580370a369c48260cc. Apr 13 19:24:14.028990 systemd[1]: Started cri-containerd-882489afc8001cc5ec22a7aeb6d20bbe1834d16b0f09f87d53d7ce46037e27ef.scope - libcontainer container 882489afc8001cc5ec22a7aeb6d20bbe1834d16b0f09f87d53d7ce46037e27ef. Apr 13 19:24:14.126062 containerd[1924]: time="2026-04-13T19:24:14.125904031Z" level=info msg="StartContainer for \"f59720c32c15f0101ba4e47aa95b37974bcc06accf6f89b7d8657b0e81670d1f\" returns successfully" Apr 13 19:24:14.199422 containerd[1924]: time="2026-04-13T19:24:14.199101051Z" level=info msg="StartContainer for \"882489afc8001cc5ec22a7aeb6d20bbe1834d16b0f09f87d53d7ce46037e27ef\" returns successfully" Apr 13 19:24:14.213831 containerd[1924]: time="2026-04-13T19:24:14.213121931Z" level=info msg="StartContainer for \"25173980d6c125282e6920d2e5d7993b0d8ace7d3eb5ff580370a369c48260cc\" returns successfully" Apr 13 19:24:14.330139 kubelet[2772]: E0413 19:24:14.330049 2772 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.19.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:24:14.357329 kubelet[2772]: E0413 
19:24:14.357010 2772 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Apr 13 19:24:14.363085 kubelet[2772]: E0413 19:24:14.361560 2772 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Apr 13 19:24:14.376243 kubelet[2772]: E0413 19:24:14.373554 2772 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Apr 13 19:24:14.483726 update_engine[1908]: I20260413 19:24:14.483506 1908 update_attempter.cc:509] Updating boot flags... Apr 13 19:24:14.610376 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3064) Apr 13 19:24:15.130823 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3069) Apr 13 19:24:15.387929 kubelet[2772]: E0413 19:24:15.386361 2772 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Apr 13 19:24:15.392452 kubelet[2772]: E0413 19:24:15.391388 2772 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Apr 13 19:24:15.499121 kubelet[2772]: I0413 19:24:15.497528 2772 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Apr 13 19:24:16.614546 kubelet[2772]: E0413 19:24:16.614485 2772 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Apr 13 19:24:19.183044 kubelet[2772]: I0413 19:24:19.182700 2772 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-12" Apr 
13 19:24:19.207647 kubelet[2772]: I0413 19:24:19.207313 2772 apiserver.go:52] "Watching apiserver" Apr 13 19:24:19.248976 kubelet[2772]: I0413 19:24:19.247927 2772 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-12" Apr 13 19:24:19.339195 kubelet[2772]: I0413 19:24:19.339148 2772 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 19:24:19.425553 kubelet[2772]: E0413 19:24:19.425485 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Apr 13 19:24:19.428523 kubelet[2772]: E0413 19:24:19.428464 2772 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-12\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-12" Apr 13 19:24:19.428523 kubelet[2772]: I0413 19:24:19.428514 2772 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:19.446144 kubelet[2772]: E0413 19:24:19.445971 2772 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-12\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:19.446144 kubelet[2772]: I0413 19:24:19.446028 2772 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:19.467522 kubelet[2772]: E0413 19:24:19.467437 2772 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-12\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:21.400405 kubelet[2772]: I0413 19:24:21.400341 2772 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 
13 19:24:22.379192 systemd[1]: Reloading requested from client PID 3239 ('systemctl') (unit session-7.scope)... Apr 13 19:24:22.379709 systemd[1]: Reloading... Apr 13 19:24:22.604787 zram_generator::config[3288]: No configuration found. Apr 13 19:24:22.613047 kubelet[2772]: I0413 19:24:22.612972 2772 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-12" Apr 13 19:24:22.625582 kubelet[2772]: I0413 19:24:22.625472 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-12" podStartSLOduration=1.6254489730000001 podStartE2EDuration="1.625448973s" podCreationTimestamp="2026-04-13 19:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:22.390899864 +0000 UTC m=+12.867696179" watchObservedRunningTime="2026-04-13 19:24:22.625448973 +0000 UTC m=+13.102245096" Apr 13 19:24:22.878363 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:24:23.131635 systemd[1]: Reloading finished in 751 ms. Apr 13 19:24:23.244266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:23.267560 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 19:24:23.268294 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:23.268516 systemd[1]: kubelet.service: Consumed 3.657s CPU time, 127.3M memory peak, 0B memory swap peak. Apr 13 19:24:23.276438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:23.641061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:24:23.652453 (kubelet)[3342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:24:23.755063 kubelet[3342]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:24:23.755063 kubelet[3342]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:23.755063 kubelet[3342]: I0413 19:24:23.754621 3342 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:24:23.772601 kubelet[3342]: I0413 19:24:23.772545 3342 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 19:24:23.775069 kubelet[3342]: I0413 19:24:23.772830 3342 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:24:23.775069 kubelet[3342]: I0413 19:24:23.772893 3342 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 19:24:23.775069 kubelet[3342]: I0413 19:24:23.772914 3342 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 19:24:23.775069 kubelet[3342]: I0413 19:24:23.773292 3342 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:24:23.782530 kubelet[3342]: I0413 19:24:23.782452 3342 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:24:23.787797 kubelet[3342]: I0413 19:24:23.787658 3342 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:24:23.794411 kubelet[3342]: E0413 19:24:23.794348 3342 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:24:23.794600 kubelet[3342]: I0413 19:24:23.794444 3342 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 19:24:23.802166 kubelet[3342]: I0413 19:24:23.802117 3342 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 19:24:23.802577 kubelet[3342]: I0413 19:24:23.802518 3342 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:24:23.803243 kubelet[3342]: I0413 19:24:23.802571 3342 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-12","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:24:23.803243 kubelet[3342]: I0413 19:24:23.802904 3342 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
19:24:23.803243 kubelet[3342]: I0413 19:24:23.802928 3342 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 19:24:23.803243 kubelet[3342]: I0413 19:24:23.802974 3342 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 19:24:23.806035 kubelet[3342]: I0413 19:24:23.803311 3342 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:23.806035 kubelet[3342]: I0413 19:24:23.803547 3342 kubelet.go:475] "Attempting to sync node with API server" Apr 13 19:24:23.806035 kubelet[3342]: I0413 19:24:23.803582 3342 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:24:23.806035 kubelet[3342]: I0413 19:24:23.803629 3342 kubelet.go:387] "Adding apiserver pod source" Apr 13 19:24:23.806035 kubelet[3342]: I0413 19:24:23.803658 3342 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:24:23.808838 kubelet[3342]: I0413 19:24:23.806835 3342 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:24:23.808838 kubelet[3342]: I0413 19:24:23.807831 3342 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:24:23.808838 kubelet[3342]: I0413 19:24:23.807893 3342 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 19:24:23.812320 kubelet[3342]: I0413 19:24:23.812277 3342 server.go:1262] "Started kubelet" Apr 13 19:24:23.819142 kubelet[3342]: I0413 19:24:23.819104 3342 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:24:23.827923 kubelet[3342]: I0413 19:24:23.827861 3342 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:24:23.830677 kubelet[3342]: I0413 19:24:23.830592 3342 ratelimit.go:56] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:24:23.830988 kubelet[3342]: I0413 19:24:23.830958 3342 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 19:24:23.831545 kubelet[3342]: I0413 19:24:23.831506 3342 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:24:23.838724 kubelet[3342]: I0413 19:24:23.838676 3342 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:24:23.854038 kubelet[3342]: I0413 19:24:23.853983 3342 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 19:24:23.855074 kubelet[3342]: E0413 19:24:23.855034 3342 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not found" Apr 13 19:24:23.890547 kubelet[3342]: I0413 19:24:23.869677 3342 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 19:24:23.890710 kubelet[3342]: I0413 19:24:23.870043 3342 reconciler.go:29] "Reconciler: start to sync state" Apr 13 19:24:23.890826 kubelet[3342]: I0413 19:24:23.878359 3342 server.go:310] "Adding debug handlers to kubelet server" Apr 13 19:24:23.894442 kubelet[3342]: I0413 19:24:23.882204 3342 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:24:23.894857 kubelet[3342]: I0413 19:24:23.894813 3342 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:24:23.923152 kubelet[3342]: I0413 19:24:23.923091 3342 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 19:24:23.933242 kubelet[3342]: I0413 19:24:23.932164 3342 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 13 19:24:23.933242 kubelet[3342]: I0413 19:24:23.932210 3342 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 19:24:23.933242 kubelet[3342]: I0413 19:24:23.932257 3342 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 19:24:23.933242 kubelet[3342]: E0413 19:24:23.932337 3342 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:24:23.965997 kubelet[3342]: E0413 19:24:23.965262 3342 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:24:23.966879 kubelet[3342]: I0413 19:24:23.966812 3342 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:24:24.032503 kubelet[3342]: E0413 19:24:24.032446 3342 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 19:24:24.129350 kubelet[3342]: I0413 19:24:24.128845 3342 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:24:24.129350 kubelet[3342]: I0413 19:24:24.128884 3342 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:24:24.129350 kubelet[3342]: I0413 19:24:24.128923 3342 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:24.129350 kubelet[3342]: I0413 19:24:24.129156 3342 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 19:24:24.129350 kubelet[3342]: I0413 19:24:24.129177 3342 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 19:24:24.129350 kubelet[3342]: I0413 19:24:24.129237 3342 policy_none.go:49] "None policy: Start" Apr 13 19:24:24.129350 kubelet[3342]: I0413 19:24:24.129258 3342 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 19:24:24.129350 kubelet[3342]: I0413 19:24:24.129282 3342 state_mem.go:36] "Initializing new in-memory state store" 
logger="Memory Manager state checkpoint" Apr 13 19:24:24.133112 kubelet[3342]: I0413 19:24:24.130194 3342 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 19:24:24.133112 kubelet[3342]: I0413 19:24:24.130226 3342 policy_none.go:47] "Start" Apr 13 19:24:24.154785 kubelet[3342]: E0413 19:24:24.153709 3342 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:24:24.158521 kubelet[3342]: I0413 19:24:24.157230 3342 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:24:24.158521 kubelet[3342]: I0413 19:24:24.157601 3342 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:24:24.161721 kubelet[3342]: I0413 19:24:24.160640 3342 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:24:24.164708 kubelet[3342]: E0413 19:24:24.162470 3342 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 19:24:24.234777 kubelet[3342]: I0413 19:24:24.233923 3342 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-12" Apr 13 19:24:24.235169 kubelet[3342]: I0413 19:24:24.234108 3342 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:24.235302 kubelet[3342]: I0413 19:24:24.234357 3342 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:24.256800 kubelet[3342]: E0413 19:24:24.256540 3342 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-12\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:24.256800 kubelet[3342]: E0413 19:24:24.256703 3342 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-12\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-12" Apr 13 19:24:24.294508 kubelet[3342]: I0413 19:24:24.294217 3342 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Apr 13 19:24:24.297552 kubelet[3342]: I0413 19:24:24.297501 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fc285bbd6314585ecdf0ddab3965583-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"0fc285bbd6314585ecdf0ddab3965583\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:24.297802 kubelet[3342]: I0413 19:24:24.297774 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fc285bbd6314585ecdf0ddab3965583-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"0fc285bbd6314585ecdf0ddab3965583\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:24.298410 
kubelet[3342]: I0413 19:24:24.297989 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fc285bbd6314585ecdf0ddab3965583-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"0fc285bbd6314585ecdf0ddab3965583\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:24.298410 kubelet[3342]: I0413 19:24:24.298051 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fc285bbd6314585ecdf0ddab3965583-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"0fc285bbd6314585ecdf0ddab3965583\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:24.298410 kubelet[3342]: I0413 19:24:24.298093 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccb70822fd7bd771664548fdb26c93e2-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-12\" (UID: \"ccb70822fd7bd771664548fdb26c93e2\") " pod="kube-system/kube-scheduler-ip-172-31-19-12" Apr 13 19:24:24.298410 kubelet[3342]: I0413 19:24:24.298129 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5bdd6043a5e9b27e782f079f3d7051a-ca-certs\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"c5bdd6043a5e9b27e782f079f3d7051a\") " pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:24.298410 kubelet[3342]: I0413 19:24:24.298165 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5bdd6043a5e9b27e782f079f3d7051a-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"c5bdd6043a5e9b27e782f079f3d7051a\") " 
pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:24.298705 kubelet[3342]: I0413 19:24:24.298217 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5bdd6043a5e9b27e782f079f3d7051a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"c5bdd6043a5e9b27e782f079f3d7051a\") " pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:24.298705 kubelet[3342]: I0413 19:24:24.298297 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0fc285bbd6314585ecdf0ddab3965583-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"0fc285bbd6314585ecdf0ddab3965583\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Apr 13 19:24:24.322286 kubelet[3342]: I0413 19:24:24.321213 3342 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-12" Apr 13 19:24:24.322286 kubelet[3342]: I0413 19:24:24.321344 3342 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-12" Apr 13 19:24:24.805049 kubelet[3342]: I0413 19:24:24.804985 3342 apiserver.go:52] "Watching apiserver" Apr 13 19:24:24.840308 kubelet[3342]: I0413 19:24:24.840078 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-12" podStartSLOduration=2.840056508 podStartE2EDuration="2.840056508s" podCreationTimestamp="2026-04-13 19:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:24.83961318 +0000 UTC m=+1.176318787" watchObservedRunningTime="2026-04-13 19:24:24.840056508 +0000 UTC m=+1.176762103" Apr 13 19:24:24.891164 kubelet[3342]: I0413 19:24:24.891094 3342 desired_state_of_world_populator.go:154] "Finished populating initial desired 
state of world" Apr 13 19:24:24.975168 kubelet[3342]: I0413 19:24:24.975069 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-12" podStartSLOduration=0.975050184 podStartE2EDuration="975.050184ms" podCreationTimestamp="2026-04-13 19:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:24.89418252 +0000 UTC m=+1.230888103" watchObservedRunningTime="2026-04-13 19:24:24.975050184 +0000 UTC m=+1.311755767" Apr 13 19:24:25.047058 kubelet[3342]: I0413 19:24:25.046650 3342 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:25.047058 kubelet[3342]: I0413 19:24:25.046765 3342 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-12" Apr 13 19:24:25.072127 kubelet[3342]: E0413 19:24:25.071964 3342 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-12\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-12" Apr 13 19:24:25.072830 kubelet[3342]: E0413 19:24:25.072329 3342 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-12\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-12" Apr 13 19:24:29.268109 kubelet[3342]: I0413 19:24:29.267698 3342 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:24:29.269046 containerd[1924]: time="2026-04-13T19:24:29.268423874Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 13 19:24:29.269637 kubelet[3342]: I0413 19:24:29.269134 3342 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:24:29.996949 systemd[1]: Created slice kubepods-besteffort-podb9d6698e_2960_4df1_b12a_c2ef2a57e1e0.slice - libcontainer container kubepods-besteffort-podb9d6698e_2960_4df1_b12a_c2ef2a57e1e0.slice. Apr 13 19:24:30.039165 kubelet[3342]: I0413 19:24:30.038975 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr4qc\" (UniqueName: \"kubernetes.io/projected/b9d6698e-2960-4df1-b12a-c2ef2a57e1e0-kube-api-access-dr4qc\") pod \"kube-proxy-lr2l9\" (UID: \"b9d6698e-2960-4df1-b12a-c2ef2a57e1e0\") " pod="kube-system/kube-proxy-lr2l9" Apr 13 19:24:30.039339 kubelet[3342]: I0413 19:24:30.039300 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9d6698e-2960-4df1-b12a-c2ef2a57e1e0-kube-proxy\") pod \"kube-proxy-lr2l9\" (UID: \"b9d6698e-2960-4df1-b12a-c2ef2a57e1e0\") " pod="kube-system/kube-proxy-lr2l9" Apr 13 19:24:30.039406 kubelet[3342]: I0413 19:24:30.039350 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9d6698e-2960-4df1-b12a-c2ef2a57e1e0-xtables-lock\") pod \"kube-proxy-lr2l9\" (UID: \"b9d6698e-2960-4df1-b12a-c2ef2a57e1e0\") " pod="kube-system/kube-proxy-lr2l9" Apr 13 19:24:30.039557 kubelet[3342]: I0413 19:24:30.039511 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9d6698e-2960-4df1-b12a-c2ef2a57e1e0-lib-modules\") pod \"kube-proxy-lr2l9\" (UID: \"b9d6698e-2960-4df1-b12a-c2ef2a57e1e0\") " pod="kube-system/kube-proxy-lr2l9" Apr 13 19:24:30.314395 containerd[1924]: time="2026-04-13T19:24:30.313040307Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-lr2l9,Uid:b9d6698e-2960-4df1-b12a-c2ef2a57e1e0,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:30.391745 containerd[1924]: time="2026-04-13T19:24:30.391541019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:30.391745 containerd[1924]: time="2026-04-13T19:24:30.391668051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:30.392045 containerd[1924]: time="2026-04-13T19:24:30.391720755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:30.392184 containerd[1924]: time="2026-04-13T19:24:30.392109567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:30.468520 systemd[1]: Started cri-containerd-d3cca94bc985303c27480dce9107507fa2e49aa8bd4f51eab693f909a7263a15.scope - libcontainer container d3cca94bc985303c27480dce9107507fa2e49aa8bd4f51eab693f909a7263a15. Apr 13 19:24:30.472656 systemd[1]: Created slice kubepods-besteffort-podd4ff4087_2fa2_44fa_9247_a692739a64d3.slice - libcontainer container kubepods-besteffort-podd4ff4087_2fa2_44fa_9247_a692739a64d3.slice. 
Apr 13 19:24:30.545504 kubelet[3342]: I0413 19:24:30.545306 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d4ff4087-2fa2-44fa-9247-a692739a64d3-var-lib-calico\") pod \"tigera-operator-5588576f44-mp8xl\" (UID: \"d4ff4087-2fa2-44fa-9247-a692739a64d3\") " pod="tigera-operator/tigera-operator-5588576f44-mp8xl" Apr 13 19:24:30.547132 kubelet[3342]: I0413 19:24:30.546596 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh72x\" (UniqueName: \"kubernetes.io/projected/d4ff4087-2fa2-44fa-9247-a692739a64d3-kube-api-access-hh72x\") pod \"tigera-operator-5588576f44-mp8xl\" (UID: \"d4ff4087-2fa2-44fa-9247-a692739a64d3\") " pod="tigera-operator/tigera-operator-5588576f44-mp8xl" Apr 13 19:24:30.548161 containerd[1924]: time="2026-04-13T19:24:30.547546000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lr2l9,Uid:b9d6698e-2960-4df1-b12a-c2ef2a57e1e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3cca94bc985303c27480dce9107507fa2e49aa8bd4f51eab693f909a7263a15\"" Apr 13 19:24:30.559078 containerd[1924]: time="2026-04-13T19:24:30.558650284Z" level=info msg="CreateContainer within sandbox \"d3cca94bc985303c27480dce9107507fa2e49aa8bd4f51eab693f909a7263a15\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:24:30.597708 containerd[1924]: time="2026-04-13T19:24:30.597511720Z" level=info msg="CreateContainer within sandbox \"d3cca94bc985303c27480dce9107507fa2e49aa8bd4f51eab693f909a7263a15\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c7eb3d19fab6f85b8d64046927ba1cf7861bde0fa65b117e279033655da2444\"" Apr 13 19:24:30.600182 containerd[1924]: time="2026-04-13T19:24:30.600082612Z" level=info msg="StartContainer for \"5c7eb3d19fab6f85b8d64046927ba1cf7861bde0fa65b117e279033655da2444\"" Apr 13 19:24:30.652279 systemd[1]: Started 
cri-containerd-5c7eb3d19fab6f85b8d64046927ba1cf7861bde0fa65b117e279033655da2444.scope - libcontainer container 5c7eb3d19fab6f85b8d64046927ba1cf7861bde0fa65b117e279033655da2444. Apr 13 19:24:30.712291 containerd[1924]: time="2026-04-13T19:24:30.712196561Z" level=info msg="StartContainer for \"5c7eb3d19fab6f85b8d64046927ba1cf7861bde0fa65b117e279033655da2444\" returns successfully" Apr 13 19:24:30.793642 containerd[1924]: time="2026-04-13T19:24:30.793553993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-mp8xl,Uid:d4ff4087-2fa2-44fa-9247-a692739a64d3,Namespace:tigera-operator,Attempt:0,}" Apr 13 19:24:30.842921 containerd[1924]: time="2026-04-13T19:24:30.840655446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:30.842921 containerd[1924]: time="2026-04-13T19:24:30.842830458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:30.843127 containerd[1924]: time="2026-04-13T19:24:30.842990106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:30.843685 containerd[1924]: time="2026-04-13T19:24:30.843585510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:30.887049 systemd[1]: Started cri-containerd-4fb8290e4752a2746850a3f0600c0ea623bfd40061b28867518c41e5f4b7175a.scope - libcontainer container 4fb8290e4752a2746850a3f0600c0ea623bfd40061b28867518c41e5f4b7175a. 
Apr 13 19:24:30.965056 containerd[1924]: time="2026-04-13T19:24:30.964958574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-mp8xl,Uid:d4ff4087-2fa2-44fa-9247-a692739a64d3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4fb8290e4752a2746850a3f0600c0ea623bfd40061b28867518c41e5f4b7175a\"" Apr 13 19:24:30.968685 containerd[1924]: time="2026-04-13T19:24:30.968626662Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 19:24:32.232182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1335821340.mount: Deactivated successfully. Apr 13 19:24:33.561072 containerd[1924]: time="2026-04-13T19:24:33.561003799Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:33.562863 containerd[1924]: time="2026-04-13T19:24:33.562766347Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 13 19:24:33.564774 containerd[1924]: time="2026-04-13T19:24:33.564291991Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:33.571767 containerd[1924]: time="2026-04-13T19:24:33.570450175Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:33.572533 containerd[1924]: time="2026-04-13T19:24:33.572487391Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.603792653s" Apr 13 19:24:33.572682 
containerd[1924]: time="2026-04-13T19:24:33.572652523Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 13 19:24:33.580253 containerd[1924]: time="2026-04-13T19:24:33.580182331Z" level=info msg="CreateContainer within sandbox \"4fb8290e4752a2746850a3f0600c0ea623bfd40061b28867518c41e5f4b7175a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 19:24:33.607232 containerd[1924]: time="2026-04-13T19:24:33.607156159Z" level=info msg="CreateContainer within sandbox \"4fb8290e4752a2746850a3f0600c0ea623bfd40061b28867518c41e5f4b7175a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e4ba8fe36fbbc0ff79f2559535d8efc200461aac00135cee1ce53d7b557ecbd0\"" Apr 13 19:24:33.608268 containerd[1924]: time="2026-04-13T19:24:33.608209255Z" level=info msg="StartContainer for \"e4ba8fe36fbbc0ff79f2559535d8efc200461aac00135cee1ce53d7b557ecbd0\"" Apr 13 19:24:33.662096 systemd[1]: Started cri-containerd-e4ba8fe36fbbc0ff79f2559535d8efc200461aac00135cee1ce53d7b557ecbd0.scope - libcontainer container e4ba8fe36fbbc0ff79f2559535d8efc200461aac00135cee1ce53d7b557ecbd0. 
Apr 13 19:24:33.712910 containerd[1924]: time="2026-04-13T19:24:33.712623668Z" level=info msg="StartContainer for \"e4ba8fe36fbbc0ff79f2559535d8efc200461aac00135cee1ce53d7b557ecbd0\" returns successfully" Apr 13 19:24:33.952600 kubelet[3342]: I0413 19:24:33.951815 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lr2l9" podStartSLOduration=4.951794469 podStartE2EDuration="4.951794469s" podCreationTimestamp="2026-04-13 19:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:31.138894087 +0000 UTC m=+7.475599682" watchObservedRunningTime="2026-04-13 19:24:33.951794469 +0000 UTC m=+10.288500040" Apr 13 19:24:34.104652 kubelet[3342]: I0413 19:24:34.104548 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-mp8xl" podStartSLOduration=1.4976008969999999 podStartE2EDuration="4.104529558s" podCreationTimestamp="2026-04-13 19:24:30 +0000 UTC" firstStartedPulling="2026-04-13 19:24:30.967549854 +0000 UTC m=+7.304255425" lastFinishedPulling="2026-04-13 19:24:33.574478527 +0000 UTC m=+9.911184086" observedRunningTime="2026-04-13 19:24:34.102559506 +0000 UTC m=+10.439265113" watchObservedRunningTime="2026-04-13 19:24:34.104529558 +0000 UTC m=+10.441235117" Apr 13 19:24:40.820260 sudo[2246]: pam_unix(sudo:session): session closed for user root Apr 13 19:24:40.976151 sshd[2243]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:40.985711 systemd[1]: sshd@6-172.31.19.12:22-4.175.71.9:33052.service: Deactivated successfully. Apr 13 19:24:40.993378 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 19:24:40.996315 systemd[1]: session-7.scope: Consumed 13.276s CPU time, 152.0M memory peak, 0B memory swap peak. Apr 13 19:24:41.000459 systemd-logind[1907]: Session 7 logged out. Waiting for processes to exit. 
Apr 13 19:24:41.003007 systemd-logind[1907]: Removed session 7. Apr 13 19:24:52.418261 systemd[1]: Created slice kubepods-besteffort-pod299a84cf_78d4_4b47_9783_7c0d1ab3f752.slice - libcontainer container kubepods-besteffort-pod299a84cf_78d4_4b47_9783_7c0d1ab3f752.slice. Apr 13 19:24:52.496204 kubelet[3342]: I0413 19:24:52.496085 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/299a84cf-78d4-4b47-9783-7c0d1ab3f752-typha-certs\") pod \"calico-typha-658965dd74-trdns\" (UID: \"299a84cf-78d4-4b47-9783-7c0d1ab3f752\") " pod="calico-system/calico-typha-658965dd74-trdns" Apr 13 19:24:52.496204 kubelet[3342]: I0413 19:24:52.496163 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/299a84cf-78d4-4b47-9783-7c0d1ab3f752-tigera-ca-bundle\") pod \"calico-typha-658965dd74-trdns\" (UID: \"299a84cf-78d4-4b47-9783-7c0d1ab3f752\") " pod="calico-system/calico-typha-658965dd74-trdns" Apr 13 19:24:52.496204 kubelet[3342]: I0413 19:24:52.496204 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgpjl\" (UniqueName: \"kubernetes.io/projected/299a84cf-78d4-4b47-9783-7c0d1ab3f752-kube-api-access-jgpjl\") pod \"calico-typha-658965dd74-trdns\" (UID: \"299a84cf-78d4-4b47-9783-7c0d1ab3f752\") " pod="calico-system/calico-typha-658965dd74-trdns" Apr 13 19:24:52.598061 kubelet[3342]: I0413 19:24:52.597833 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-cni-log-dir\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.598061 kubelet[3342]: I0413 19:24:52.597928 3342 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-cni-net-dir\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.598061 kubelet[3342]: I0413 19:24:52.597969 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-flexvol-driver-host\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.598061 kubelet[3342]: I0413 19:24:52.598006 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-sys-fs\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.598637 kubelet[3342]: I0413 19:24:52.598448 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-var-lib-calico\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.598637 kubelet[3342]: I0413 19:24:52.598542 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-var-run-calico\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.598637 kubelet[3342]: I0413 19:24:52.598592 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c7kpz\" (UniqueName: \"kubernetes.io/projected/c7f2c515-6ae7-43a0-aad8-d7045f10875a-kube-api-access-c7kpz\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.600989 kubelet[3342]: I0413 19:24:52.599858 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-nodeproc\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.600989 kubelet[3342]: I0413 19:24:52.599912 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-xtables-lock\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.600989 kubelet[3342]: I0413 19:24:52.599960 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-lib-modules\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.600989 kubelet[3342]: I0413 19:24:52.600022 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-bpffs\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.600989 kubelet[3342]: I0413 19:24:52.600093 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-cni-bin-dir\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.600989 kubelet[3342]: I0413 19:24:52.600132 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c7f2c515-6ae7-43a0-aad8-d7045f10875a-node-certs\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.600668 systemd[1]: Created slice kubepods-besteffort-podc7f2c515_6ae7_43a0_aad8_d7045f10875a.slice - libcontainer container kubepods-besteffort-podc7f2c515_6ae7_43a0_aad8_d7045f10875a.slice. Apr 13 19:24:52.601550 kubelet[3342]: I0413 19:24:52.600171 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c7f2c515-6ae7-43a0-aad8-d7045f10875a-policysync\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.601550 kubelet[3342]: I0413 19:24:52.600210 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f2c515-6ae7-43a0-aad8-d7045f10875a-tigera-ca-bundle\") pod \"calico-node-tzgn7\" (UID: \"c7f2c515-6ae7-43a0-aad8-d7045f10875a\") " pod="calico-system/calico-node-tzgn7" Apr 13 19:24:52.711004 kubelet[3342]: E0413 19:24:52.710610 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.711004 kubelet[3342]: W0413 19:24:52.710686 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 
19:24:52.711004 kubelet[3342]: E0413 19:24:52.710816 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.718051 kubelet[3342]: E0413 19:24:52.717929 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.718051 kubelet[3342]: W0413 19:24:52.717985 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.719012 kubelet[3342]: E0413 19:24:52.718017 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.732890 containerd[1924]: time="2026-04-13T19:24:52.732646574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-658965dd74-trdns,Uid:299a84cf-78d4-4b47-9783-7c0d1ab3f752,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:52.753649 kubelet[3342]: E0413 19:24:52.753426 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.753649 kubelet[3342]: W0413 19:24:52.753494 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.753649 kubelet[3342]: E0413 19:24:52.753567 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.757223 kubelet[3342]: E0413 19:24:52.756268 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzt8j" podUID="5bbd3846-92e6-469c-993d-c2ef707609bb" Apr 13 19:24:52.798236 kubelet[3342]: E0413 19:24:52.797689 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.799831 kubelet[3342]: W0413 19:24:52.797743 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.799992 kubelet[3342]: E0413 19:24:52.799835 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.802097 kubelet[3342]: E0413 19:24:52.801550 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.804220 kubelet[3342]: W0413 19:24:52.801589 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.804220 kubelet[3342]: E0413 19:24:52.802841 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.805815 kubelet[3342]: E0413 19:24:52.805776 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.807855 kubelet[3342]: W0413 19:24:52.807003 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.807855 kubelet[3342]: E0413 19:24:52.807062 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.810329 kubelet[3342]: E0413 19:24:52.809722 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.810329 kubelet[3342]: W0413 19:24:52.809822 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.810329 kubelet[3342]: E0413 19:24:52.809855 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.815164 kubelet[3342]: E0413 19:24:52.814886 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.815164 kubelet[3342]: W0413 19:24:52.814926 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.815164 kubelet[3342]: E0413 19:24:52.814958 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.816678 kubelet[3342]: E0413 19:24:52.815802 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.816678 kubelet[3342]: W0413 19:24:52.815833 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.816678 kubelet[3342]: E0413 19:24:52.815862 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.819844 kubelet[3342]: E0413 19:24:52.818319 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.819844 kubelet[3342]: W0413 19:24:52.818356 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.819844 kubelet[3342]: E0413 19:24:52.818388 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.822179 kubelet[3342]: E0413 19:24:52.820713 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.822179 kubelet[3342]: W0413 19:24:52.820777 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.822179 kubelet[3342]: E0413 19:24:52.820811 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.824539 containerd[1924]: time="2026-04-13T19:24:52.820337727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:52.824539 containerd[1924]: time="2026-04-13T19:24:52.820448655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:52.824539 containerd[1924]: time="2026-04-13T19:24:52.820486755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:52.824539 containerd[1924]: time="2026-04-13T19:24:52.820677447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:52.824965 kubelet[3342]: E0413 19:24:52.824051 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.824965 kubelet[3342]: W0413 19:24:52.824084 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.824965 kubelet[3342]: E0413 19:24:52.824117 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.828898 kubelet[3342]: E0413 19:24:52.826860 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.828898 kubelet[3342]: W0413 19:24:52.826897 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.828898 kubelet[3342]: E0413 19:24:52.826932 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.829344 kubelet[3342]: E0413 19:24:52.829310 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.830326 kubelet[3342]: W0413 19:24:52.829578 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.830326 kubelet[3342]: E0413 19:24:52.829629 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.834091 kubelet[3342]: E0413 19:24:52.832510 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.834091 kubelet[3342]: W0413 19:24:52.832548 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.834091 kubelet[3342]: E0413 19:24:52.832581 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.835874 kubelet[3342]: E0413 19:24:52.834614 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.835874 kubelet[3342]: W0413 19:24:52.834643 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.835874 kubelet[3342]: E0413 19:24:52.834673 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.836275 kubelet[3342]: E0413 19:24:52.836222 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.836775 kubelet[3342]: W0413 19:24:52.836517 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.837455 kubelet[3342]: E0413 19:24:52.837253 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.840696 kubelet[3342]: E0413 19:24:52.839891 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.841029 kubelet[3342]: W0413 19:24:52.840992 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.841160 kubelet[3342]: E0413 19:24:52.841134 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.846337 kubelet[3342]: E0413 19:24:52.845994 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.846337 kubelet[3342]: W0413 19:24:52.846029 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.846337 kubelet[3342]: E0413 19:24:52.846061 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.848779 kubelet[3342]: E0413 19:24:52.847576 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.848779 kubelet[3342]: W0413 19:24:52.847607 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.848779 kubelet[3342]: E0413 19:24:52.847638 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.850358 kubelet[3342]: E0413 19:24:52.849409 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.850358 kubelet[3342]: W0413 19:24:52.849440 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.850358 kubelet[3342]: E0413 19:24:52.849473 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.851840 kubelet[3342]: E0413 19:24:52.851537 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.853147 kubelet[3342]: W0413 19:24:52.852317 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.853147 kubelet[3342]: E0413 19:24:52.852368 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.854278 kubelet[3342]: E0413 19:24:52.854243 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.855968 kubelet[3342]: W0413 19:24:52.854694 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.855968 kubelet[3342]: E0413 19:24:52.854773 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.858107 kubelet[3342]: E0413 19:24:52.858070 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.859925 kubelet[3342]: W0413 19:24:52.858784 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.859925 kubelet[3342]: E0413 19:24:52.858833 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.859925 kubelet[3342]: I0413 19:24:52.858896 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5bbd3846-92e6-469c-993d-c2ef707609bb-varrun\") pod \"csi-node-driver-nzt8j\" (UID: \"5bbd3846-92e6-469c-993d-c2ef707609bb\") " pod="calico-system/csi-node-driver-nzt8j" Apr 13 19:24:52.863155 kubelet[3342]: E0413 19:24:52.863107 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.863382 kubelet[3342]: W0413 19:24:52.863349 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.863907 kubelet[3342]: E0413 19:24:52.863511 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.863907 kubelet[3342]: I0413 19:24:52.863564 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bbd3846-92e6-469c-993d-c2ef707609bb-kubelet-dir\") pod \"csi-node-driver-nzt8j\" (UID: \"5bbd3846-92e6-469c-993d-c2ef707609bb\") " pod="calico-system/csi-node-driver-nzt8j" Apr 13 19:24:52.864791 kubelet[3342]: E0413 19:24:52.864463 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.864791 kubelet[3342]: W0413 19:24:52.864496 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.864791 kubelet[3342]: E0413 19:24:52.864528 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.864791 kubelet[3342]: I0413 19:24:52.864567 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5bbd3846-92e6-469c-993d-c2ef707609bb-registration-dir\") pod \"csi-node-driver-nzt8j\" (UID: \"5bbd3846-92e6-469c-993d-c2ef707609bb\") " pod="calico-system/csi-node-driver-nzt8j" Apr 13 19:24:52.865065 systemd[1]: Started cri-containerd-c622478843717d1e2b562b4ac52f701e93e1674dbd544d3a11d070b208c75b09.scope - libcontainer container c622478843717d1e2b562b4ac52f701e93e1674dbd544d3a11d070b208c75b09. 
Apr 13 19:24:52.865583 kubelet[3342]: E0413 19:24:52.865552 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.865725 kubelet[3342]: W0413 19:24:52.865698 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.865887 kubelet[3342]: E0413 19:24:52.865862 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.866032 kubelet[3342]: I0413 19:24:52.866008 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5bbd3846-92e6-469c-993d-c2ef707609bb-socket-dir\") pod \"csi-node-driver-nzt8j\" (UID: \"5bbd3846-92e6-469c-993d-c2ef707609bb\") " pod="calico-system/csi-node-driver-nzt8j" Apr 13 19:24:52.869133 kubelet[3342]: E0413 19:24:52.869094 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.869383 kubelet[3342]: W0413 19:24:52.869328 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.870903 kubelet[3342]: E0413 19:24:52.870867 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.871078 kubelet[3342]: I0413 19:24:52.871052 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g72g\" (UniqueName: \"kubernetes.io/projected/5bbd3846-92e6-469c-993d-c2ef707609bb-kube-api-access-7g72g\") pod \"csi-node-driver-nzt8j\" (UID: \"5bbd3846-92e6-469c-993d-c2ef707609bb\") " pod="calico-system/csi-node-driver-nzt8j" Apr 13 19:24:52.871653 kubelet[3342]: E0413 19:24:52.871619 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.872130 kubelet[3342]: W0413 19:24:52.871846 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.873019 kubelet[3342]: E0413 19:24:52.872431 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.874427 kubelet[3342]: E0413 19:24:52.874201 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.875118 kubelet[3342]: W0413 19:24:52.874881 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.875567 kubelet[3342]: E0413 19:24:52.875445 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.877436 kubelet[3342]: E0413 19:24:52.877300 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.878136 kubelet[3342]: W0413 19:24:52.878004 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.878883 kubelet[3342]: E0413 19:24:52.878629 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.882597 kubelet[3342]: E0413 19:24:52.882464 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.883174 kubelet[3342]: W0413 19:24:52.883042 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.883654 kubelet[3342]: E0413 19:24:52.883411 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.884410 kubelet[3342]: E0413 19:24:52.884227 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.884410 kubelet[3342]: W0413 19:24:52.884294 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.884410 kubelet[3342]: E0413 19:24:52.884355 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.885440 kubelet[3342]: E0413 19:24:52.884881 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.885440 kubelet[3342]: W0413 19:24:52.884933 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.885440 kubelet[3342]: E0413 19:24:52.884979 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.886177 kubelet[3342]: E0413 19:24:52.885705 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.886177 kubelet[3342]: W0413 19:24:52.885748 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.886177 kubelet[3342]: E0413 19:24:52.885779 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.886406 kubelet[3342]: E0413 19:24:52.886339 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.886406 kubelet[3342]: W0413 19:24:52.886362 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.886406 kubelet[3342]: E0413 19:24:52.886386 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.887070 kubelet[3342]: E0413 19:24:52.887018 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.887070 kubelet[3342]: W0413 19:24:52.887056 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.887375 kubelet[3342]: E0413 19:24:52.887086 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.887877 kubelet[3342]: E0413 19:24:52.887723 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.887877 kubelet[3342]: W0413 19:24:52.887772 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.887877 kubelet[3342]: E0413 19:24:52.887800 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.938951 containerd[1924]: time="2026-04-13T19:24:52.937627635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tzgn7,Uid:c7f2c515-6ae7-43a0-aad8-d7045f10875a,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:52.974029 kubelet[3342]: E0413 19:24:52.973569 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.974029 kubelet[3342]: W0413 19:24:52.973909 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.975083 kubelet[3342]: E0413 19:24:52.973955 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.976054 kubelet[3342]: E0413 19:24:52.975903 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.976054 kubelet[3342]: W0413 19:24:52.975939 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.976054 kubelet[3342]: E0413 19:24:52.975970 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.981037 kubelet[3342]: E0413 19:24:52.980981 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.981037 kubelet[3342]: W0413 19:24:52.981024 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.981316 kubelet[3342]: E0413 19:24:52.981059 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.984362 kubelet[3342]: E0413 19:24:52.984273 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.984362 kubelet[3342]: W0413 19:24:52.984321 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.984574 kubelet[3342]: E0413 19:24:52.984369 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.986792 kubelet[3342]: E0413 19:24:52.985763 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.986792 kubelet[3342]: W0413 19:24:52.985799 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.986792 kubelet[3342]: E0413 19:24:52.985831 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.987651 kubelet[3342]: E0413 19:24:52.987382 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.987651 kubelet[3342]: W0413 19:24:52.987418 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.987651 kubelet[3342]: E0413 19:24:52.987448 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.988753 kubelet[3342]: E0413 19:24:52.988670 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.988753 kubelet[3342]: W0413 19:24:52.988708 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.988921 kubelet[3342]: E0413 19:24:52.988760 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.990089 kubelet[3342]: E0413 19:24:52.989859 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.990089 kubelet[3342]: W0413 19:24:52.989896 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.990089 kubelet[3342]: E0413 19:24:52.989928 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.992959 kubelet[3342]: E0413 19:24:52.992907 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.992959 kubelet[3342]: W0413 19:24:52.992948 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.993244 kubelet[3342]: E0413 19:24:52.992983 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.996613 kubelet[3342]: E0413 19:24:52.996442 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.996613 kubelet[3342]: W0413 19:24:52.996485 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.996613 kubelet[3342]: E0413 19:24:52.996519 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:52.998108 kubelet[3342]: E0413 19:24:52.998052 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.998108 kubelet[3342]: W0413 19:24:52.998089 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:52.998571 kubelet[3342]: E0413 19:24:52.998122 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:52.999529 kubelet[3342]: E0413 19:24:52.999475 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:52.999529 kubelet[3342]: W0413 19:24:52.999514 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.000542 kubelet[3342]: E0413 19:24:52.999546 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:53.001572 kubelet[3342]: E0413 19:24:53.001523 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.001572 kubelet[3342]: W0413 19:24:53.001564 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.003211 kubelet[3342]: E0413 19:24:53.001598 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:53.004616 kubelet[3342]: E0413 19:24:53.004555 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.004616 kubelet[3342]: W0413 19:24:53.004593 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.004785 kubelet[3342]: E0413 19:24:53.004624 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:53.007445 kubelet[3342]: E0413 19:24:53.007292 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.007445 kubelet[3342]: W0413 19:24:53.007333 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.007445 kubelet[3342]: E0413 19:24:53.007367 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:53.007924 kubelet[3342]: E0413 19:24:53.007815 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.007924 kubelet[3342]: W0413 19:24:53.007836 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.007924 kubelet[3342]: E0413 19:24:53.007860 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:53.010797 kubelet[3342]: E0413 19:24:53.009196 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.010797 kubelet[3342]: W0413 19:24:53.009235 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.010797 kubelet[3342]: E0413 19:24:53.009267 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:53.012150 containerd[1924]: time="2026-04-13T19:24:53.012009300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:53.012301 containerd[1924]: time="2026-04-13T19:24:53.012116112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:53.012301 containerd[1924]: time="2026-04-13T19:24:53.012155400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:53.012825 kubelet[3342]: E0413 19:24:53.012490 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.012825 kubelet[3342]: W0413 19:24:53.012527 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.012825 kubelet[3342]: E0413 19:24:53.012562 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:53.013038 containerd[1924]: time="2026-04-13T19:24:53.012321408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:53.013959 kubelet[3342]: E0413 19:24:53.013648 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.013959 kubelet[3342]: W0413 19:24:53.013950 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.014135 kubelet[3342]: E0413 19:24:53.013984 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:53.015913 kubelet[3342]: E0413 19:24:53.015861 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.015913 kubelet[3342]: W0413 19:24:53.015902 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.016164 kubelet[3342]: E0413 19:24:53.015934 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:53.019100 kubelet[3342]: E0413 19:24:53.019049 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.019100 kubelet[3342]: W0413 19:24:53.019087 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.019363 kubelet[3342]: E0413 19:24:53.019120 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:53.020348 kubelet[3342]: E0413 19:24:53.019673 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.020348 kubelet[3342]: W0413 19:24:53.019707 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.020348 kubelet[3342]: E0413 19:24:53.019771 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:53.021755 kubelet[3342]: E0413 19:24:53.021598 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.021755 kubelet[3342]: W0413 19:24:53.021637 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.021755 kubelet[3342]: E0413 19:24:53.021670 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:53.024181 kubelet[3342]: E0413 19:24:53.023799 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.024181 kubelet[3342]: W0413 19:24:53.023848 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.024181 kubelet[3342]: E0413 19:24:53.023883 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:53.024715 kubelet[3342]: E0413 19:24:53.024681 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.026545 kubelet[3342]: W0413 19:24:53.025208 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.026545 kubelet[3342]: E0413 19:24:53.025261 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:53.052098 kubelet[3342]: E0413 19:24:53.052038 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:53.052098 kubelet[3342]: W0413 19:24:53.052075 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:53.052098 kubelet[3342]: E0413 19:24:53.052106 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:53.066245 containerd[1924]: time="2026-04-13T19:24:53.066192756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-658965dd74-trdns,Uid:299a84cf-78d4-4b47-9783-7c0d1ab3f752,Namespace:calico-system,Attempt:0,} returns sandbox id \"c622478843717d1e2b562b4ac52f701e93e1674dbd544d3a11d070b208c75b09\"" Apr 13 19:24:53.072222 containerd[1924]: time="2026-04-13T19:24:53.072079008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 19:24:53.073081 systemd[1]: Started cri-containerd-7a064229687a359c7abe86ae2f4e18ddca1b0343ac314f301b903343be89bfa1.scope - libcontainer container 7a064229687a359c7abe86ae2f4e18ddca1b0343ac314f301b903343be89bfa1. Apr 13 19:24:53.138830 containerd[1924]: time="2026-04-13T19:24:53.138712344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tzgn7,Uid:c7f2c515-6ae7-43a0-aad8-d7045f10875a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a064229687a359c7abe86ae2f4e18ddca1b0343ac314f301b903343be89bfa1\"" Apr 13 19:24:54.545180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046066176.mount: Deactivated successfully. 
Apr 13 19:24:54.934307 kubelet[3342]: E0413 19:24:54.934174 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzt8j" podUID="5bbd3846-92e6-469c-993d-c2ef707609bb" Apr 13 19:24:55.614954 containerd[1924]: time="2026-04-13T19:24:55.613979897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:55.614954 containerd[1924]: time="2026-04-13T19:24:55.614898749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 13 19:24:55.616840 containerd[1924]: time="2026-04-13T19:24:55.616716941Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:55.625440 containerd[1924]: time="2026-04-13T19:24:55.625369109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:55.627264 containerd[1924]: time="2026-04-13T19:24:55.627214673Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.555066941s" Apr 13 19:24:55.627415 containerd[1924]: time="2026-04-13T19:24:55.627384605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 13 19:24:55.631707 containerd[1924]: time="2026-04-13T19:24:55.631646009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 19:24:55.663137 containerd[1924]: time="2026-04-13T19:24:55.663022721Z" level=info msg="CreateContainer within sandbox \"c622478843717d1e2b562b4ac52f701e93e1674dbd544d3a11d070b208c75b09\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 19:24:55.687971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1848597433.mount: Deactivated successfully. Apr 13 19:24:55.691023 containerd[1924]: time="2026-04-13T19:24:55.689333621Z" level=info msg="CreateContainer within sandbox \"c622478843717d1e2b562b4ac52f701e93e1674dbd544d3a11d070b208c75b09\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fd9c8898369279c02a580b1b94e230a3da57cf538fc8adf88eef21f21b95b680\"" Apr 13 19:24:55.692209 containerd[1924]: time="2026-04-13T19:24:55.692141837Z" level=info msg="StartContainer for \"fd9c8898369279c02a580b1b94e230a3da57cf538fc8adf88eef21f21b95b680\"" Apr 13 19:24:55.750053 systemd[1]: Started cri-containerd-fd9c8898369279c02a580b1b94e230a3da57cf538fc8adf88eef21f21b95b680.scope - libcontainer container fd9c8898369279c02a580b1b94e230a3da57cf538fc8adf88eef21f21b95b680. 
Apr 13 19:24:55.829520 containerd[1924]: time="2026-04-13T19:24:55.829433934Z" level=info msg="StartContainer for \"fd9c8898369279c02a580b1b94e230a3da57cf538fc8adf88eef21f21b95b680\" returns successfully" Apr 13 19:24:56.184580 kubelet[3342]: E0413 19:24:56.184515 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.185468 kubelet[3342]: W0413 19:24:56.184546 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.185468 kubelet[3342]: E0413 19:24:56.184788 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.186697 kubelet[3342]: E0413 19:24:56.185932 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.186697 kubelet[3342]: W0413 19:24:56.185986 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.186697 kubelet[3342]: E0413 19:24:56.186170 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.188215 kubelet[3342]: E0413 19:24:56.187884 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.188215 kubelet[3342]: W0413 19:24:56.187922 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.188215 kubelet[3342]: E0413 19:24:56.187956 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.189398 kubelet[3342]: E0413 19:24:56.189350 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.189398 kubelet[3342]: W0413 19:24:56.189387 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.189631 kubelet[3342]: E0413 19:24:56.189420 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.190929 kubelet[3342]: E0413 19:24:56.190859 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.190929 kubelet[3342]: W0413 19:24:56.190901 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.191207 kubelet[3342]: E0413 19:24:56.191053 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.193268 kubelet[3342]: E0413 19:24:56.193214 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.193268 kubelet[3342]: W0413 19:24:56.193254 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.193518 kubelet[3342]: E0413 19:24:56.193289 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.193815 kubelet[3342]: E0413 19:24:56.193691 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.193815 kubelet[3342]: W0413 19:24:56.193721 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.193815 kubelet[3342]: E0413 19:24:56.193793 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.194874 kubelet[3342]: E0413 19:24:56.194824 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.194874 kubelet[3342]: W0413 19:24:56.194862 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.194874 kubelet[3342]: E0413 19:24:56.194893 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.196080 kubelet[3342]: E0413 19:24:56.196013 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.196080 kubelet[3342]: W0413 19:24:56.196056 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.196080 kubelet[3342]: E0413 19:24:56.196089 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.196609 kubelet[3342]: E0413 19:24:56.196562 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.196609 kubelet[3342]: W0413 19:24:56.196595 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.197236 kubelet[3342]: E0413 19:24:56.196624 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.198055 kubelet[3342]: E0413 19:24:56.198002 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.198055 kubelet[3342]: W0413 19:24:56.198042 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.198263 kubelet[3342]: E0413 19:24:56.198075 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.200903 kubelet[3342]: E0413 19:24:56.200841 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.200903 kubelet[3342]: W0413 19:24:56.200886 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.201115 kubelet[3342]: E0413 19:24:56.200922 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.201634 kubelet[3342]: E0413 19:24:56.201590 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.201634 kubelet[3342]: W0413 19:24:56.201626 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.201895 kubelet[3342]: E0413 19:24:56.201656 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.203901 kubelet[3342]: E0413 19:24:56.203847 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.203901 kubelet[3342]: W0413 19:24:56.203886 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.204109 kubelet[3342]: E0413 19:24:56.203918 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.204907 kubelet[3342]: E0413 19:24:56.204857 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.204907 kubelet[3342]: W0413 19:24:56.204894 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.205114 kubelet[3342]: E0413 19:24:56.204930 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.227033 kubelet[3342]: E0413 19:24:56.226980 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.227033 kubelet[3342]: W0413 19:24:56.227020 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.227268 kubelet[3342]: E0413 19:24:56.227055 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.227757 kubelet[3342]: E0413 19:24:56.227699 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.227757 kubelet[3342]: W0413 19:24:56.227752 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.227934 kubelet[3342]: E0413 19:24:56.227785 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.228412 kubelet[3342]: E0413 19:24:56.228370 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.228412 kubelet[3342]: W0413 19:24:56.228402 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.228559 kubelet[3342]: E0413 19:24:56.228432 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.231263 kubelet[3342]: E0413 19:24:56.231191 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.231263 kubelet[3342]: W0413 19:24:56.231235 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.231263 kubelet[3342]: E0413 19:24:56.231269 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.232347 kubelet[3342]: E0413 19:24:56.232305 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.232347 kubelet[3342]: W0413 19:24:56.232342 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.232510 kubelet[3342]: E0413 19:24:56.232375 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.232906 kubelet[3342]: E0413 19:24:56.232855 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.232906 kubelet[3342]: W0413 19:24:56.232888 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.233092 kubelet[3342]: E0413 19:24:56.232915 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.233942 kubelet[3342]: E0413 19:24:56.233901 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.233942 kubelet[3342]: W0413 19:24:56.233937 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.234084 kubelet[3342]: E0413 19:24:56.233970 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.234848 kubelet[3342]: E0413 19:24:56.234800 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.234848 kubelet[3342]: W0413 19:24:56.234836 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.235283 kubelet[3342]: E0413 19:24:56.234867 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.239523 kubelet[3342]: E0413 19:24:56.239440 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.239523 kubelet[3342]: W0413 19:24:56.239497 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.239776 kubelet[3342]: E0413 19:24:56.239532 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.240949 kubelet[3342]: E0413 19:24:56.240897 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.240949 kubelet[3342]: W0413 19:24:56.240936 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.241154 kubelet[3342]: E0413 19:24:56.240969 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.242953 kubelet[3342]: E0413 19:24:56.242898 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.242953 kubelet[3342]: W0413 19:24:56.242938 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.243144 kubelet[3342]: E0413 19:24:56.242972 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.244003 kubelet[3342]: E0413 19:24:56.243959 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.244003 kubelet[3342]: W0413 19:24:56.243995 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.244150 kubelet[3342]: E0413 19:24:56.244026 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.245453 kubelet[3342]: E0413 19:24:56.245404 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.245453 kubelet[3342]: W0413 19:24:56.245441 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.245642 kubelet[3342]: E0413 19:24:56.245474 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.247977 kubelet[3342]: E0413 19:24:56.247923 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.247977 kubelet[3342]: W0413 19:24:56.247971 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.248202 kubelet[3342]: E0413 19:24:56.248006 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.248634 kubelet[3342]: E0413 19:24:56.248587 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.249095 kubelet[3342]: W0413 19:24:56.248680 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.249095 kubelet[3342]: E0413 19:24:56.248711 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.250433 kubelet[3342]: E0413 19:24:56.250114 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.250433 kubelet[3342]: W0413 19:24:56.250160 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.250433 kubelet[3342]: E0413 19:24:56.250201 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.251928 kubelet[3342]: E0413 19:24:56.251294 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.251928 kubelet[3342]: W0413 19:24:56.251393 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.251928 kubelet[3342]: E0413 19:24:56.251850 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:56.252507 kubelet[3342]: E0413 19:24:56.252466 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:56.252507 kubelet[3342]: W0413 19:24:56.252500 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:56.252636 kubelet[3342]: E0413 19:24:56.252529 3342 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:56.935560 kubelet[3342]: E0413 19:24:56.933594 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzt8j" podUID="5bbd3846-92e6-469c-993d-c2ef707609bb" Apr 13 19:24:56.952503 containerd[1924]: time="2026-04-13T19:24:56.952425163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:56.954246 containerd[1924]: time="2026-04-13T19:24:56.954172915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 13 19:24:56.955923 containerd[1924]: time="2026-04-13T19:24:56.955827031Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:56.962049 containerd[1924]: time="2026-04-13T19:24:56.961977463Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:56.964123 containerd[1924]: time="2026-04-13T19:24:56.963795355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.33208049s" Apr 13 19:24:56.964123 containerd[1924]: time="2026-04-13T19:24:56.963860359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 13 19:24:56.972928 containerd[1924]: time="2026-04-13T19:24:56.972695839Z" level=info msg="CreateContainer within sandbox \"7a064229687a359c7abe86ae2f4e18ddca1b0343ac314f301b903343be89bfa1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 19:24:57.000972 containerd[1924]: time="2026-04-13T19:24:57.000913035Z" level=info msg="CreateContainer within sandbox \"7a064229687a359c7abe86ae2f4e18ddca1b0343ac314f301b903343be89bfa1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b25769139f4e62fef415d8e40d3f53f765d161bd2a4a470ddac028a0683a7892\"" Apr 13 19:24:57.001271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount65192919.mount: Deactivated successfully. 
Apr 13 19:24:57.004208 containerd[1924]: time="2026-04-13T19:24:57.004130932Z" level=info msg="StartContainer for \"b25769139f4e62fef415d8e40d3f53f765d161bd2a4a470ddac028a0683a7892\"" Apr 13 19:24:57.080109 systemd[1]: Started cri-containerd-b25769139f4e62fef415d8e40d3f53f765d161bd2a4a470ddac028a0683a7892.scope - libcontainer container b25769139f4e62fef415d8e40d3f53f765d161bd2a4a470ddac028a0683a7892. Apr 13 19:24:57.144405 containerd[1924]: time="2026-04-13T19:24:57.144263596Z" level=info msg="StartContainer for \"b25769139f4e62fef415d8e40d3f53f765d161bd2a4a470ddac028a0683a7892\" returns successfully" Apr 13 19:24:57.170158 kubelet[3342]: I0413 19:24:57.170096 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:24:57.181572 systemd[1]: cri-containerd-b25769139f4e62fef415d8e40d3f53f765d161bd2a4a470ddac028a0683a7892.scope: Deactivated successfully. Apr 13 19:24:57.210043 kubelet[3342]: I0413 19:24:57.209858 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-658965dd74-trdns" podStartSLOduration=2.650506348 podStartE2EDuration="5.209833721s" podCreationTimestamp="2026-04-13 19:24:52 +0000 UTC" firstStartedPulling="2026-04-13 19:24:53.069521076 +0000 UTC m=+29.406226647" lastFinishedPulling="2026-04-13 19:24:55.628848437 +0000 UTC m=+31.965554020" observedRunningTime="2026-04-13 19:24:56.238188868 +0000 UTC m=+32.574894451" watchObservedRunningTime="2026-04-13 19:24:57.209833721 +0000 UTC m=+33.546539292" Apr 13 19:24:57.638330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b25769139f4e62fef415d8e40d3f53f765d161bd2a4a470ddac028a0683a7892-rootfs.mount: Deactivated successfully. 
Apr 13 19:24:57.693574 containerd[1924]: time="2026-04-13T19:24:57.693324607Z" level=info msg="shim disconnected" id=b25769139f4e62fef415d8e40d3f53f765d161bd2a4a470ddac028a0683a7892 namespace=k8s.io Apr 13 19:24:57.693574 containerd[1924]: time="2026-04-13T19:24:57.693401119Z" level=warning msg="cleaning up after shim disconnected" id=b25769139f4e62fef415d8e40d3f53f765d161bd2a4a470ddac028a0683a7892 namespace=k8s.io Apr 13 19:24:57.693574 containerd[1924]: time="2026-04-13T19:24:57.693420919Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:58.178095 containerd[1924]: time="2026-04-13T19:24:58.178001177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 19:24:58.932957 kubelet[3342]: E0413 19:24:58.932875 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzt8j" podUID="5bbd3846-92e6-469c-993d-c2ef707609bb" Apr 13 19:25:00.933759 kubelet[3342]: E0413 19:25:00.933679 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzt8j" podUID="5bbd3846-92e6-469c-993d-c2ef707609bb" Apr 13 19:25:02.933392 kubelet[3342]: E0413 19:25:02.932828 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzt8j" podUID="5bbd3846-92e6-469c-993d-c2ef707609bb" Apr 13 19:25:04.637773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount952167928.mount: Deactivated successfully. 
Apr 13 19:25:04.707797 containerd[1924]: time="2026-04-13T19:25:04.707605982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:04.709703 containerd[1924]: time="2026-04-13T19:25:04.709636730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 13 19:25:04.711120 containerd[1924]: time="2026-04-13T19:25:04.711060314Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:04.717644 containerd[1924]: time="2026-04-13T19:25:04.716015330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:04.717644 containerd[1924]: time="2026-04-13T19:25:04.717427346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.539323401s" Apr 13 19:25:04.717644 containerd[1924]: time="2026-04-13T19:25:04.717489110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 13 19:25:04.724308 containerd[1924]: time="2026-04-13T19:25:04.724239566Z" level=info msg="CreateContainer within sandbox \"7a064229687a359c7abe86ae2f4e18ddca1b0343ac314f301b903343be89bfa1\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 19:25:04.750150 containerd[1924]: time="2026-04-13T19:25:04.750075530Z" level=info 
msg="CreateContainer within sandbox \"7a064229687a359c7abe86ae2f4e18ddca1b0343ac314f301b903343be89bfa1\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e02ab91883d2b772e7bb1beae589ff8d57c6756d63d3cc24efc9c74e2f60edc5\"" Apr 13 19:25:04.751188 containerd[1924]: time="2026-04-13T19:25:04.751143002Z" level=info msg="StartContainer for \"e02ab91883d2b772e7bb1beae589ff8d57c6756d63d3cc24efc9c74e2f60edc5\"" Apr 13 19:25:04.816283 systemd[1]: Started cri-containerd-e02ab91883d2b772e7bb1beae589ff8d57c6756d63d3cc24efc9c74e2f60edc5.scope - libcontainer container e02ab91883d2b772e7bb1beae589ff8d57c6756d63d3cc24efc9c74e2f60edc5. Apr 13 19:25:04.874758 containerd[1924]: time="2026-04-13T19:25:04.874349883Z" level=info msg="StartContainer for \"e02ab91883d2b772e7bb1beae589ff8d57c6756d63d3cc24efc9c74e2f60edc5\" returns successfully" Apr 13 19:25:04.933568 kubelet[3342]: E0413 19:25:04.933212 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzt8j" podUID="5bbd3846-92e6-469c-993d-c2ef707609bb" Apr 13 19:25:05.072195 systemd[1]: cri-containerd-e02ab91883d2b772e7bb1beae589ff8d57c6756d63d3cc24efc9c74e2f60edc5.scope: Deactivated successfully. Apr 13 19:25:05.635082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e02ab91883d2b772e7bb1beae589ff8d57c6756d63d3cc24efc9c74e2f60edc5-rootfs.mount: Deactivated successfully. 
Apr 13 19:25:05.691033 containerd[1924]: time="2026-04-13T19:25:05.690725127Z" level=info msg="shim disconnected" id=e02ab91883d2b772e7bb1beae589ff8d57c6756d63d3cc24efc9c74e2f60edc5 namespace=k8s.io Apr 13 19:25:05.691033 containerd[1924]: time="2026-04-13T19:25:05.690836091Z" level=warning msg="cleaning up after shim disconnected" id=e02ab91883d2b772e7bb1beae589ff8d57c6756d63d3cc24efc9c74e2f60edc5 namespace=k8s.io Apr 13 19:25:05.691033 containerd[1924]: time="2026-04-13T19:25:05.690861507Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:06.210010 containerd[1924]: time="2026-04-13T19:25:06.209950441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 19:25:06.933447 kubelet[3342]: E0413 19:25:06.933369 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzt8j" podUID="5bbd3846-92e6-469c-993d-c2ef707609bb" Apr 13 19:25:08.932831 kubelet[3342]: E0413 19:25:08.932753 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzt8j" podUID="5bbd3846-92e6-469c-993d-c2ef707609bb" Apr 13 19:25:09.376415 containerd[1924]: time="2026-04-13T19:25:09.374841701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:09.377200 containerd[1924]: time="2026-04-13T19:25:09.377143817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 13 19:25:09.377457 containerd[1924]: time="2026-04-13T19:25:09.377377433Z" level=info msg="ImageCreate event 
name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:09.381541 containerd[1924]: time="2026-04-13T19:25:09.381475985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:09.384962 containerd[1924]: time="2026-04-13T19:25:09.384883697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.174864292s" Apr 13 19:25:09.384962 containerd[1924]: time="2026-04-13T19:25:09.384951593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 13 19:25:09.392258 containerd[1924]: time="2026-04-13T19:25:09.391151885Z" level=info msg="CreateContainer within sandbox \"7a064229687a359c7abe86ae2f4e18ddca1b0343ac314f301b903343be89bfa1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 19:25:09.411968 containerd[1924]: time="2026-04-13T19:25:09.411886973Z" level=info msg="CreateContainer within sandbox \"7a064229687a359c7abe86ae2f4e18ddca1b0343ac314f301b903343be89bfa1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0769bbb6be45bfb17755c194d2054986a5a8a2e9c71ce80723558e82f5b40169\"" Apr 13 19:25:09.416493 containerd[1924]: time="2026-04-13T19:25:09.413228093Z" level=info msg="StartContainer for \"0769bbb6be45bfb17755c194d2054986a5a8a2e9c71ce80723558e82f5b40169\"" Apr 13 19:25:09.475049 systemd[1]: Started 
cri-containerd-0769bbb6be45bfb17755c194d2054986a5a8a2e9c71ce80723558e82f5b40169.scope - libcontainer container 0769bbb6be45bfb17755c194d2054986a5a8a2e9c71ce80723558e82f5b40169. Apr 13 19:25:09.535394 containerd[1924]: time="2026-04-13T19:25:09.534600090Z" level=info msg="StartContainer for \"0769bbb6be45bfb17755c194d2054986a5a8a2e9c71ce80723558e82f5b40169\" returns successfully" Apr 13 19:25:10.610643 containerd[1924]: time="2026-04-13T19:25:10.610569331Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:25:10.615777 systemd[1]: cri-containerd-0769bbb6be45bfb17755c194d2054986a5a8a2e9c71ce80723558e82f5b40169.scope: Deactivated successfully. Apr 13 19:25:10.616236 systemd[1]: cri-containerd-0769bbb6be45bfb17755c194d2054986a5a8a2e9c71ce80723558e82f5b40169.scope: Consumed 1.018s CPU time. Apr 13 19:25:10.630820 kubelet[3342]: I0413 19:25:10.627822 3342 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 13 19:25:10.688441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0769bbb6be45bfb17755c194d2054986a5a8a2e9c71ce80723558e82f5b40169-rootfs.mount: Deactivated successfully. Apr 13 19:25:10.726420 systemd[1]: Created slice kubepods-besteffort-podfced93c3_4c99_4651_a35d_57e6eb8bc151.slice - libcontainer container kubepods-besteffort-podfced93c3_4c99_4651_a35d_57e6eb8bc151.slice. 
Apr 13 19:25:10.740241 kubelet[3342]: E0413 19:25:10.740138 3342 status_manager.go:1018] "Failed to get status for pod" err="pods \"calico-kube-controllers-6444d8f97b-b7bhq\" is forbidden: User \"system:node:ip-172-31-19-12\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-19-12' and this object" podUID="fced93c3-4c99-4651-a35d-57e6eb8bc151" pod="calico-system/calico-kube-controllers-6444d8f97b-b7bhq" Apr 13 19:25:10.753198 systemd[1]: Created slice kubepods-burstable-pod145266ff_892c_4549_b337_19bfa44f9e42.slice - libcontainer container kubepods-burstable-pod145266ff_892c_4549_b337_19bfa44f9e42.slice. Apr 13 19:25:10.833200 systemd[1]: Created slice kubepods-besteffort-pod222d069d_fa76_4e04_b779_eb3d366b1a95.slice - libcontainer container kubepods-besteffort-pod222d069d_fa76_4e04_b779_eb3d366b1a95.slice. Apr 13 19:25:10.847416 kubelet[3342]: I0413 19:25:10.847326 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmnqx\" (UniqueName: \"kubernetes.io/projected/145266ff-892c-4549-b337-19bfa44f9e42-kube-api-access-cmnqx\") pod \"coredns-66bc5c9577-c4w5w\" (UID: \"145266ff-892c-4549-b337-19bfa44f9e42\") " pod="kube-system/coredns-66bc5c9577-c4w5w" Apr 13 19:25:10.847416 kubelet[3342]: I0413 19:25:10.847408 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xrl\" (UniqueName: \"kubernetes.io/projected/222d069d-fa76-4e04-b779-eb3d366b1a95-kube-api-access-n9xrl\") pod \"calico-apiserver-77999c4d5b-4qscg\" (UID: \"222d069d-fa76-4e04-b779-eb3d366b1a95\") " pod="calico-system/calico-apiserver-77999c4d5b-4qscg" Apr 13 19:25:10.848998 kubelet[3342]: I0413 19:25:10.847465 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q6kt\" (UniqueName: 
\"kubernetes.io/projected/fced93c3-4c99-4651-a35d-57e6eb8bc151-kube-api-access-7q6kt\") pod \"calico-kube-controllers-6444d8f97b-b7bhq\" (UID: \"fced93c3-4c99-4651-a35d-57e6eb8bc151\") " pod="calico-system/calico-kube-controllers-6444d8f97b-b7bhq" Apr 13 19:25:10.848998 kubelet[3342]: I0413 19:25:10.847506 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff8ff070-0d4d-4815-8703-aa78cce64b54-calico-apiserver-certs\") pod \"calico-apiserver-77999c4d5b-4q7t4\" (UID: \"ff8ff070-0d4d-4815-8703-aa78cce64b54\") " pod="calico-system/calico-apiserver-77999c4d5b-4q7t4" Apr 13 19:25:10.848998 kubelet[3342]: I0413 19:25:10.847546 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/222d069d-fa76-4e04-b779-eb3d366b1a95-calico-apiserver-certs\") pod \"calico-apiserver-77999c4d5b-4qscg\" (UID: \"222d069d-fa76-4e04-b779-eb3d366b1a95\") " pod="calico-system/calico-apiserver-77999c4d5b-4qscg" Apr 13 19:25:10.848998 kubelet[3342]: I0413 19:25:10.847588 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/145266ff-892c-4549-b337-19bfa44f9e42-config-volume\") pod \"coredns-66bc5c9577-c4w5w\" (UID: \"145266ff-892c-4549-b337-19bfa44f9e42\") " pod="kube-system/coredns-66bc5c9577-c4w5w" Apr 13 19:25:10.848998 kubelet[3342]: I0413 19:25:10.847630 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fced93c3-4c99-4651-a35d-57e6eb8bc151-tigera-ca-bundle\") pod \"calico-kube-controllers-6444d8f97b-b7bhq\" (UID: \"fced93c3-4c99-4651-a35d-57e6eb8bc151\") " pod="calico-system/calico-kube-controllers-6444d8f97b-b7bhq" Apr 13 19:25:10.850198 kubelet[3342]: I0413 
19:25:10.847679 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nv5n\" (UniqueName: \"kubernetes.io/projected/ff8ff070-0d4d-4815-8703-aa78cce64b54-kube-api-access-7nv5n\") pod \"calico-apiserver-77999c4d5b-4q7t4\" (UID: \"ff8ff070-0d4d-4815-8703-aa78cce64b54\") " pod="calico-system/calico-apiserver-77999c4d5b-4q7t4" Apr 13 19:25:10.871313 systemd[1]: Created slice kubepods-besteffort-podff8ff070_0d4d_4815_8703_aa78cce64b54.slice - libcontainer container kubepods-besteffort-podff8ff070_0d4d_4815_8703_aa78cce64b54.slice. Apr 13 19:25:10.919921 systemd[1]: Created slice kubepods-besteffort-pod4abfcee3_399e_4a75_8885_c3e8ec391b58.slice - libcontainer container kubepods-besteffort-pod4abfcee3_399e_4a75_8885_c3e8ec391b58.slice. Apr 13 19:25:10.984637 systemd[1]: Created slice kubepods-besteffort-pod075d2244_7605_481e_bd76_956a508f7aee.slice - libcontainer container kubepods-besteffort-pod075d2244_7605_481e_bd76_956a508f7aee.slice. Apr 13 19:25:11.048292 systemd[1]: Created slice kubepods-besteffort-pod5bbd3846_92e6_469c_993d_c2ef707609bb.slice - libcontainer container kubepods-besteffort-pod5bbd3846_92e6_469c_993d_c2ef707609bb.slice. 
Apr 13 19:25:11.049503 kubelet[3342]: I0413 19:25:11.049453 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4abfcee3-399e-4a75-8885-c3e8ec391b58-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-8wg88\" (UID: \"4abfcee3-399e-4a75-8885-c3e8ec391b58\") " pod="calico-system/goldmane-cccfbd5cf-8wg88" Apr 13 19:25:11.049621 kubelet[3342]: I0413 19:25:11.049573 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmn6m\" (UniqueName: \"kubernetes.io/projected/4abfcee3-399e-4a75-8885-c3e8ec391b58-kube-api-access-bmn6m\") pod \"goldmane-cccfbd5cf-8wg88\" (UID: \"4abfcee3-399e-4a75-8885-c3e8ec391b58\") " pod="calico-system/goldmane-cccfbd5cf-8wg88" Apr 13 19:25:11.051344 kubelet[3342]: I0413 19:25:11.049687 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4abfcee3-399e-4a75-8885-c3e8ec391b58-config\") pod \"goldmane-cccfbd5cf-8wg88\" (UID: \"4abfcee3-399e-4a75-8885-c3e8ec391b58\") " pod="calico-system/goldmane-cccfbd5cf-8wg88" Apr 13 19:25:11.051344 kubelet[3342]: I0413 19:25:11.049774 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4abfcee3-399e-4a75-8885-c3e8ec391b58-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-8wg88\" (UID: \"4abfcee3-399e-4a75-8885-c3e8ec391b58\") " pod="calico-system/goldmane-cccfbd5cf-8wg88" Apr 13 19:25:11.071641 systemd[1]: Created slice kubepods-burstable-podd45a62d9_5e0c_4809_865b_4362b930842e.slice - libcontainer container kubepods-burstable-podd45a62d9_5e0c_4809_865b_4362b930842e.slice. 
Apr 13 19:25:11.114869 containerd[1924]: time="2026-04-13T19:25:11.113982006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzt8j,Uid:5bbd3846-92e6-469c-993d-c2ef707609bb,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:11.150692 kubelet[3342]: I0413 19:25:11.150458 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/075d2244-7605-481e-bd76-956a508f7aee-nginx-config\") pod \"whisker-7895fb4cdc-sjfzr\" (UID: \"075d2244-7605-481e-bd76-956a508f7aee\") " pod="calico-system/whisker-7895fb4cdc-sjfzr" Apr 13 19:25:11.150692 kubelet[3342]: I0413 19:25:11.150566 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/075d2244-7605-481e-bd76-956a508f7aee-whisker-ca-bundle\") pod \"whisker-7895fb4cdc-sjfzr\" (UID: \"075d2244-7605-481e-bd76-956a508f7aee\") " pod="calico-system/whisker-7895fb4cdc-sjfzr" Apr 13 19:25:11.150692 kubelet[3342]: I0413 19:25:11.150605 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t565x\" (UniqueName: \"kubernetes.io/projected/075d2244-7605-481e-bd76-956a508f7aee-kube-api-access-t565x\") pod \"whisker-7895fb4cdc-sjfzr\" (UID: \"075d2244-7605-481e-bd76-956a508f7aee\") " pod="calico-system/whisker-7895fb4cdc-sjfzr" Apr 13 19:25:11.150692 kubelet[3342]: I0413 19:25:11.150646 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcxcn\" (UniqueName: \"kubernetes.io/projected/d45a62d9-5e0c-4809-865b-4362b930842e-kube-api-access-qcxcn\") pod \"coredns-66bc5c9577-2478b\" (UID: \"d45a62d9-5e0c-4809-865b-4362b930842e\") " pod="kube-system/coredns-66bc5c9577-2478b" Apr 13 19:25:11.151043 kubelet[3342]: I0413 19:25:11.150770 3342 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/075d2244-7605-481e-bd76-956a508f7aee-whisker-backend-key-pair\") pod \"whisker-7895fb4cdc-sjfzr\" (UID: \"075d2244-7605-481e-bd76-956a508f7aee\") " pod="calico-system/whisker-7895fb4cdc-sjfzr" Apr 13 19:25:11.151043 kubelet[3342]: I0413 19:25:11.150810 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d45a62d9-5e0c-4809-865b-4362b930842e-config-volume\") pod \"coredns-66bc5c9577-2478b\" (UID: \"d45a62d9-5e0c-4809-865b-4362b930842e\") " pod="kube-system/coredns-66bc5c9577-2478b" Apr 13 19:25:11.237556 containerd[1924]: time="2026-04-13T19:25:11.236788530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77999c4d5b-4qscg,Uid:222d069d-fa76-4e04-b779-eb3d366b1a95,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:11.248873 containerd[1924]: time="2026-04-13T19:25:11.248612910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77999c4d5b-4q7t4,Uid:ff8ff070-0d4d-4815-8703-aa78cce64b54,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:11.253456 containerd[1924]: time="2026-04-13T19:25:11.253217550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8wg88,Uid:4abfcee3-399e-4a75-8885-c3e8ec391b58,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:11.257090 containerd[1924]: time="2026-04-13T19:25:11.256979274Z" level=info msg="shim disconnected" id=0769bbb6be45bfb17755c194d2054986a5a8a2e9c71ce80723558e82f5b40169 namespace=k8s.io Apr 13 19:25:11.257090 containerd[1924]: time="2026-04-13T19:25:11.257080074Z" level=warning msg="cleaning up after shim disconnected" id=0769bbb6be45bfb17755c194d2054986a5a8a2e9c71ce80723558e82f5b40169 namespace=k8s.io Apr 13 19:25:11.257090 containerd[1924]: time="2026-04-13T19:25:11.257106294Z" level=info msg="cleaning up dead 
shim" namespace=k8s.io Apr 13 19:25:11.350058 containerd[1924]: time="2026-04-13T19:25:11.349171579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6444d8f97b-b7bhq,Uid:fced93c3-4c99-4651-a35d-57e6eb8bc151,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:11.377400 containerd[1924]: time="2026-04-13T19:25:11.377297923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c4w5w,Uid:145266ff-892c-4549-b337-19bfa44f9e42,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:11.405577 containerd[1924]: time="2026-04-13T19:25:11.405044251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2478b,Uid:d45a62d9-5e0c-4809-865b-4362b930842e,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:11.627672 containerd[1924]: time="2026-04-13T19:25:11.627612872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7895fb4cdc-sjfzr,Uid:075d2244-7605-481e-bd76-956a508f7aee,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:11.935795 containerd[1924]: time="2026-04-13T19:25:11.935508514Z" level=error msg="Failed to destroy network for sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.945831 containerd[1924]: time="2026-04-13T19:25:11.940074430Z" level=error msg="Failed to destroy network for sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.946502 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd-shm.mount: Deactivated successfully. 
Apr 13 19:25:11.958936 containerd[1924]: time="2026-04-13T19:25:11.956656522Z" level=error msg="encountered an error cleaning up failed sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.958936 containerd[1924]: time="2026-04-13T19:25:11.956785858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77999c4d5b-4q7t4,Uid:ff8ff070-0d4d-4815-8703-aa78cce64b54,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.959209 kubelet[3342]: E0413 19:25:11.957315 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.959209 kubelet[3342]: E0413 19:25:11.957399 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-77999c4d5b-4q7t4" Apr 13 19:25:11.959209 kubelet[3342]: E0413 
19:25:11.957448 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-77999c4d5b-4q7t4" Apr 13 19:25:11.959931 kubelet[3342]: E0413 19:25:11.957547 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77999c4d5b-4q7t4_calico-system(ff8ff070-0d4d-4815-8703-aa78cce64b54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77999c4d5b-4q7t4_calico-system(ff8ff070-0d4d-4815-8703-aa78cce64b54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-77999c4d5b-4q7t4" podUID="ff8ff070-0d4d-4815-8703-aa78cce64b54" Apr 13 19:25:11.960139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7-shm.mount: Deactivated successfully. 
Apr 13 19:25:11.967900 containerd[1924]: time="2026-04-13T19:25:11.967809922Z" level=error msg="encountered an error cleaning up failed sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.968476 containerd[1924]: time="2026-04-13T19:25:11.968308486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzt8j,Uid:5bbd3846-92e6-469c-993d-c2ef707609bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.969332 kubelet[3342]: E0413 19:25:11.969262 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.969332 kubelet[3342]: E0413 19:25:11.969351 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nzt8j" Apr 13 19:25:11.969332 kubelet[3342]: E0413 19:25:11.969393 3342 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nzt8j" Apr 13 19:25:11.969791 kubelet[3342]: E0413 19:25:11.969494 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nzt8j_calico-system(5bbd3846-92e6-469c-993d-c2ef707609bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nzt8j_calico-system(5bbd3846-92e6-469c-993d-c2ef707609bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nzt8j" podUID="5bbd3846-92e6-469c-993d-c2ef707609bb" Apr 13 19:25:11.984026 containerd[1924]: time="2026-04-13T19:25:11.983934850Z" level=error msg="Failed to destroy network for sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.989138 containerd[1924]: time="2026-04-13T19:25:11.984531694Z" level=error msg="encountered an error cleaning up failed sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.989138 containerd[1924]: time="2026-04-13T19:25:11.984612862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6444d8f97b-b7bhq,Uid:fced93c3-4c99-4651-a35d-57e6eb8bc151,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.989415 kubelet[3342]: E0413 19:25:11.984962 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.989415 kubelet[3342]: E0413 19:25:11.985042 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6444d8f97b-b7bhq" Apr 13 19:25:11.989415 kubelet[3342]: E0413 19:25:11.985075 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-6444d8f97b-b7bhq" Apr 13 19:25:11.989803 kubelet[3342]: E0413 19:25:11.985154 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6444d8f97b-b7bhq_calico-system(fced93c3-4c99-4651-a35d-57e6eb8bc151)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6444d8f97b-b7bhq_calico-system(fced93c3-4c99-4651-a35d-57e6eb8bc151)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6444d8f97b-b7bhq" podUID="fced93c3-4c99-4651-a35d-57e6eb8bc151" Apr 13 19:25:11.996694 containerd[1924]: time="2026-04-13T19:25:11.996536986Z" level=error msg="Failed to destroy network for sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.997132 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44-shm.mount: Deactivated successfully. 
Apr 13 19:25:12.002587 containerd[1924]: time="2026-04-13T19:25:12.002334690Z" level=error msg="encountered an error cleaning up failed sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.002587 containerd[1924]: time="2026-04-13T19:25:12.002433378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77999c4d5b-4qscg,Uid:222d069d-fa76-4e04-b779-eb3d366b1a95,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.002879 kubelet[3342]: E0413 19:25:12.002801 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.002959 kubelet[3342]: E0413 19:25:12.002874 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-77999c4d5b-4qscg" Apr 13 19:25:12.002959 kubelet[3342]: E0413 
19:25:12.002906 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-77999c4d5b-4qscg" Apr 13 19:25:12.003248 kubelet[3342]: E0413 19:25:12.003005 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77999c4d5b-4qscg_calico-system(222d069d-fa76-4e04-b779-eb3d366b1a95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77999c4d5b-4qscg_calico-system(222d069d-fa76-4e04-b779-eb3d366b1a95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-77999c4d5b-4qscg" podUID="222d069d-fa76-4e04-b779-eb3d366b1a95" Apr 13 19:25:12.031691 containerd[1924]: time="2026-04-13T19:25:12.031604178Z" level=error msg="Failed to destroy network for sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.034567 containerd[1924]: time="2026-04-13T19:25:12.034303914Z" level=error msg="encountered an error cleaning up failed sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.034567 containerd[1924]: time="2026-04-13T19:25:12.034417242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8wg88,Uid:4abfcee3-399e-4a75-8885-c3e8ec391b58,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.035971 kubelet[3342]: E0413 19:25:12.035804 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.035971 kubelet[3342]: E0413 19:25:12.035899 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-8wg88" Apr 13 19:25:12.035971 kubelet[3342]: E0413 19:25:12.035940 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-8wg88" Apr 13 19:25:12.037493 kubelet[3342]: E0413 19:25:12.036034 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-8wg88_calico-system(4abfcee3-399e-4a75-8885-c3e8ec391b58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-8wg88_calico-system(4abfcee3-399e-4a75-8885-c3e8ec391b58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-8wg88" podUID="4abfcee3-399e-4a75-8885-c3e8ec391b58" Apr 13 19:25:12.052948 containerd[1924]: time="2026-04-13T19:25:12.052041210Z" level=error msg="Failed to destroy network for sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.052948 containerd[1924]: time="2026-04-13T19:25:12.052677126Z" level=error msg="encountered an error cleaning up failed sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.052948 containerd[1924]: time="2026-04-13T19:25:12.052790874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2478b,Uid:d45a62d9-5e0c-4809-865b-4362b930842e,Namespace:kube-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.053997 kubelet[3342]: E0413 19:25:12.053400 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.053997 kubelet[3342]: E0413 19:25:12.053504 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2478b" Apr 13 19:25:12.053997 kubelet[3342]: E0413 19:25:12.053549 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2478b" Apr 13 19:25:12.054276 kubelet[3342]: E0413 19:25:12.053649 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2478b_kube-system(d45a62d9-5e0c-4809-865b-4362b930842e)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-66bc5c9577-2478b_kube-system(d45a62d9-5e0c-4809-865b-4362b930842e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2478b" podUID="d45a62d9-5e0c-4809-865b-4362b930842e" Apr 13 19:25:12.055404 containerd[1924]: time="2026-04-13T19:25:12.055167234Z" level=error msg="Failed to destroy network for sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.056198 containerd[1924]: time="2026-04-13T19:25:12.056124822Z" level=error msg="encountered an error cleaning up failed sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.056481 containerd[1924]: time="2026-04-13T19:25:12.056407146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c4w5w,Uid:145266ff-892c-4549-b337-19bfa44f9e42,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.057920 kubelet[3342]: E0413 19:25:12.057803 3342 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.057920 kubelet[3342]: E0413 19:25:12.057906 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-c4w5w" Apr 13 19:25:12.058276 kubelet[3342]: E0413 19:25:12.057944 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-c4w5w" Apr 13 19:25:12.058276 kubelet[3342]: E0413 19:25:12.058016 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-c4w5w_kube-system(145266ff-892c-4549-b337-19bfa44f9e42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-c4w5w_kube-system(145266ff-892c-4549-b337-19bfa44f9e42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-c4w5w" podUID="145266ff-892c-4549-b337-19bfa44f9e42" Apr 13 19:25:12.085348 containerd[1924]: time="2026-04-13T19:25:12.085271538Z" level=error msg="Failed to destroy network for sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.085934 containerd[1924]: time="2026-04-13T19:25:12.085879722Z" level=error msg="encountered an error cleaning up failed sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.086049 containerd[1924]: time="2026-04-13T19:25:12.085967538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7895fb4cdc-sjfzr,Uid:075d2244-7605-481e-bd76-956a508f7aee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.086349 kubelet[3342]: E0413 19:25:12.086296 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.086532 kubelet[3342]: E0413 
19:25:12.086375 3342 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7895fb4cdc-sjfzr" Apr 13 19:25:12.086532 kubelet[3342]: E0413 19:25:12.086409 3342 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7895fb4cdc-sjfzr" Apr 13 19:25:12.087014 kubelet[3342]: E0413 19:25:12.086885 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7895fb4cdc-sjfzr_calico-system(075d2244-7605-481e-bd76-956a508f7aee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7895fb4cdc-sjfzr_calico-system(075d2244-7605-481e-bd76-956a508f7aee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7895fb4cdc-sjfzr" podUID="075d2244-7605-481e-bd76-956a508f7aee" Apr 13 19:25:12.236671 kubelet[3342]: I0413 19:25:12.236366 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:25:12.238822 
containerd[1924]: time="2026-04-13T19:25:12.238683331Z" level=info msg="StopPodSandbox for \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\"" Apr 13 19:25:12.239936 containerd[1924]: time="2026-04-13T19:25:12.239711827Z" level=info msg="Ensure that sandbox e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd in task-service has been cleanup successfully" Apr 13 19:25:12.243310 kubelet[3342]: I0413 19:25:12.242673 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:25:12.244062 containerd[1924]: time="2026-04-13T19:25:12.243830539Z" level=info msg="StopPodSandbox for \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\"" Apr 13 19:25:12.245795 containerd[1924]: time="2026-04-13T19:25:12.245460919Z" level=info msg="Ensure that sandbox c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3 in task-service has been cleanup successfully" Apr 13 19:25:12.249422 kubelet[3342]: I0413 19:25:12.249243 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:25:12.253946 containerd[1924]: time="2026-04-13T19:25:12.252000331Z" level=info msg="StopPodSandbox for \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\"" Apr 13 19:25:12.253946 containerd[1924]: time="2026-04-13T19:25:12.252610435Z" level=info msg="Ensure that sandbox 9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44 in task-service has been cleanup successfully" Apr 13 19:25:12.262571 kubelet[3342]: I0413 19:25:12.262516 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Apr 13 19:25:12.265646 containerd[1924]: time="2026-04-13T19:25:12.265572655Z" level=info msg="StopPodSandbox for 
\"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\"" Apr 13 19:25:12.268153 containerd[1924]: time="2026-04-13T19:25:12.267990703Z" level=info msg="Ensure that sandbox 3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7 in task-service has been cleanup successfully" Apr 13 19:25:12.277205 kubelet[3342]: I0413 19:25:12.277086 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:12.280034 containerd[1924]: time="2026-04-13T19:25:12.279956731Z" level=info msg="StopPodSandbox for \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\"" Apr 13 19:25:12.280766 containerd[1924]: time="2026-04-13T19:25:12.280443223Z" level=info msg="Ensure that sandbox a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6 in task-service has been cleanup successfully" Apr 13 19:25:12.289530 kubelet[3342]: I0413 19:25:12.289456 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:25:12.292517 containerd[1924]: time="2026-04-13T19:25:12.292199083Z" level=info msg="StopPodSandbox for \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\"" Apr 13 19:25:12.296786 containerd[1924]: time="2026-04-13T19:25:12.296710027Z" level=info msg="Ensure that sandbox cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625 in task-service has been cleanup successfully" Apr 13 19:25:12.306054 kubelet[3342]: I0413 19:25:12.306001 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Apr 13 19:25:12.310520 containerd[1924]: time="2026-04-13T19:25:12.310423304Z" level=info msg="StopPodSandbox for \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\"" Apr 13 19:25:12.310953 containerd[1924]: 
time="2026-04-13T19:25:12.310722620Z" level=info msg="Ensure that sandbox a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918 in task-service has been cleanup successfully" Apr 13 19:25:12.320308 kubelet[3342]: I0413 19:25:12.320186 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:12.324320 containerd[1924]: time="2026-04-13T19:25:12.324253784Z" level=info msg="StopPodSandbox for \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\"" Apr 13 19:25:12.326592 containerd[1924]: time="2026-04-13T19:25:12.326032916Z" level=info msg="Ensure that sandbox 070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead in task-service has been cleanup successfully" Apr 13 19:25:12.405836 containerd[1924]: time="2026-04-13T19:25:12.405778688Z" level=info msg="CreateContainer within sandbox \"7a064229687a359c7abe86ae2f4e18ddca1b0343ac314f301b903343be89bfa1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 19:25:12.484848 containerd[1924]: time="2026-04-13T19:25:12.484787012Z" level=info msg="CreateContainer within sandbox \"7a064229687a359c7abe86ae2f4e18ddca1b0343ac314f301b903343be89bfa1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c5a8d344594749a0a85d2b1769031fe2debd9af1247b8ea58d90a841018581d4\"" Apr 13 19:25:12.491562 containerd[1924]: time="2026-04-13T19:25:12.491072780Z" level=info msg="StartContainer for \"c5a8d344594749a0a85d2b1769031fe2debd9af1247b8ea58d90a841018581d4\"" Apr 13 19:25:12.577119 containerd[1924]: time="2026-04-13T19:25:12.577004721Z" level=error msg="StopPodSandbox for \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\" failed" error="failed to destroy network for sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.577661 kubelet[3342]: E0413 19:25:12.577585 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:25:12.577855 kubelet[3342]: E0413 19:25:12.577668 3342 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3"} Apr 13 19:25:12.577855 kubelet[3342]: E0413 19:25:12.577774 3342 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"145266ff-892c-4549-b337-19bfa44f9e42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:25:12.577855 kubelet[3342]: E0413 19:25:12.577822 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"145266ff-892c-4549-b337-19bfa44f9e42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-c4w5w" 
podUID="145266ff-892c-4549-b337-19bfa44f9e42" Apr 13 19:25:12.600502 containerd[1924]: time="2026-04-13T19:25:12.600436233Z" level=error msg="StopPodSandbox for \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\" failed" error="failed to destroy network for sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.601068 kubelet[3342]: E0413 19:25:12.601010 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:12.601338 kubelet[3342]: E0413 19:25:12.601293 3342 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6"} Apr 13 19:25:12.601539 kubelet[3342]: E0413 19:25:12.601488 3342 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"075d2244-7605-481e-bd76-956a508f7aee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:25:12.602804 kubelet[3342]: E0413 19:25:12.601891 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"075d2244-7605-481e-bd76-956a508f7aee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7895fb4cdc-sjfzr" podUID="075d2244-7605-481e-bd76-956a508f7aee" Apr 13 19:25:12.604701 containerd[1924]: time="2026-04-13T19:25:12.604637205Z" level=error msg="StopPodSandbox for \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\" failed" error="failed to destroy network for sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.605322 kubelet[3342]: E0413 19:25:12.605244 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:25:12.605813 kubelet[3342]: E0413 19:25:12.605583 3342 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd"} Apr 13 19:25:12.605813 kubelet[3342]: E0413 19:25:12.605658 3342 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff8ff070-0d4d-4815-8703-aa78cce64b54\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:25:12.605813 kubelet[3342]: E0413 19:25:12.605704 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff8ff070-0d4d-4815-8703-aa78cce64b54\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-77999c4d5b-4q7t4" podUID="ff8ff070-0d4d-4815-8703-aa78cce64b54" Apr 13 19:25:12.613633 containerd[1924]: time="2026-04-13T19:25:12.612964269Z" level=error msg="StopPodSandbox for \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\" failed" error="failed to destroy network for sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.613966 kubelet[3342]: E0413 19:25:12.613308 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:25:12.613966 
kubelet[3342]: E0413 19:25:12.613375 3342 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44"} Apr 13 19:25:12.613966 kubelet[3342]: E0413 19:25:12.613425 3342 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fced93c3-4c99-4651-a35d-57e6eb8bc151\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:25:12.613966 kubelet[3342]: E0413 19:25:12.613510 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fced93c3-4c99-4651-a35d-57e6eb8bc151\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6444d8f97b-b7bhq" podUID="fced93c3-4c99-4651-a35d-57e6eb8bc151" Apr 13 19:25:12.642422 containerd[1924]: time="2026-04-13T19:25:12.641487489Z" level=error msg="StopPodSandbox for \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\" failed" error="failed to destroy network for sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.643170 kubelet[3342]: E0413 19:25:12.641966 3342 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Apr 13 19:25:12.643170 kubelet[3342]: E0413 19:25:12.642054 3342 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7"} Apr 13 19:25:12.643170 kubelet[3342]: E0413 19:25:12.642116 3342 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5bbd3846-92e6-469c-993d-c2ef707609bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:25:12.643170 kubelet[3342]: E0413 19:25:12.642162 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5bbd3846-92e6-469c-993d-c2ef707609bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nzt8j" podUID="5bbd3846-92e6-469c-993d-c2ef707609bb" Apr 13 19:25:12.653396 containerd[1924]: time="2026-04-13T19:25:12.653328225Z" level=error msg="StopPodSandbox for 
\"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\" failed" error="failed to destroy network for sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.654092 containerd[1924]: time="2026-04-13T19:25:12.653958321Z" level=error msg="StopPodSandbox for \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\" failed" error="failed to destroy network for sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.654294 kubelet[3342]: E0413 19:25:12.654089 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:25:12.654294 kubelet[3342]: E0413 19:25:12.654154 3342 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625"} Apr 13 19:25:12.654294 kubelet[3342]: E0413 19:25:12.654205 3342 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"222d069d-fa76-4e04-b779-eb3d366b1a95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:25:12.654294 kubelet[3342]: E0413 19:25:12.654278 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"222d069d-fa76-4e04-b779-eb3d366b1a95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-77999c4d5b-4qscg" podUID="222d069d-fa76-4e04-b779-eb3d366b1a95" Apr 13 19:25:12.655395 kubelet[3342]: E0413 19:25:12.655159 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:12.655395 kubelet[3342]: E0413 19:25:12.655222 3342 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead"} Apr 13 19:25:12.655395 kubelet[3342]: E0413 19:25:12.655270 3342 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4abfcee3-399e-4a75-8885-c3e8ec391b58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:25:12.655395 kubelet[3342]: E0413 19:25:12.655312 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4abfcee3-399e-4a75-8885-c3e8ec391b58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-8wg88" podUID="4abfcee3-399e-4a75-8885-c3e8ec391b58" Apr 13 19:25:12.677585 containerd[1924]: time="2026-04-13T19:25:12.677041641Z" level=error msg="StopPodSandbox for \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\" failed" error="failed to destroy network for sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:12.679929 kubelet[3342]: E0413 19:25:12.678917 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Apr 13 19:25:12.679929 kubelet[3342]: E0413 19:25:12.678984 3342 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918"} Apr 13 19:25:12.679929 kubelet[3342]: E0413 19:25:12.679037 3342 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d45a62d9-5e0c-4809-865b-4362b930842e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 19:25:12.679929 kubelet[3342]: E0413 19:25:12.679082 3342 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d45a62d9-5e0c-4809-865b-4362b930842e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2478b" podUID="d45a62d9-5e0c-4809-865b-4362b930842e" Apr 13 19:25:12.690996 systemd[1]: Started cri-containerd-c5a8d344594749a0a85d2b1769031fe2debd9af1247b8ea58d90a841018581d4.scope - libcontainer container c5a8d344594749a0a85d2b1769031fe2debd9af1247b8ea58d90a841018581d4. Apr 13 19:25:12.707082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6-shm.mount: Deactivated successfully. Apr 13 19:25:12.707492 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918-shm.mount: Deactivated successfully. 
Apr 13 19:25:12.707816 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3-shm.mount: Deactivated successfully. Apr 13 19:25:12.707964 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead-shm.mount: Deactivated successfully. Apr 13 19:25:12.708100 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625-shm.mount: Deactivated successfully. Apr 13 19:25:12.760694 containerd[1924]: time="2026-04-13T19:25:12.759527482Z" level=info msg="StartContainer for \"c5a8d344594749a0a85d2b1769031fe2debd9af1247b8ea58d90a841018581d4\" returns successfully" Apr 13 19:25:13.361559 containerd[1924]: time="2026-04-13T19:25:13.361480149Z" level=info msg="StopPodSandbox for \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\"" Apr 13 19:25:13.430771 kubelet[3342]: I0413 19:25:13.429972 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tzgn7" podStartSLOduration=5.18588646 podStartE2EDuration="21.429945393s" podCreationTimestamp="2026-04-13 19:24:52 +0000 UTC" firstStartedPulling="2026-04-13 19:24:53.141895692 +0000 UTC m=+29.478601263" lastFinishedPulling="2026-04-13 19:25:09.385954637 +0000 UTC m=+45.722660196" observedRunningTime="2026-04-13 19:25:13.426959649 +0000 UTC m=+49.763665244" watchObservedRunningTime="2026-04-13 19:25:13.429945393 +0000 UTC m=+49.766650988" Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.486 [INFO][4608] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.486 [INFO][4608] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" iface="eth0" netns="/var/run/netns/cni-156824af-b4cc-5600-6338-7364373406b5" Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.487 [INFO][4608] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" iface="eth0" netns="/var/run/netns/cni-156824af-b4cc-5600-6338-7364373406b5" Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.488 [INFO][4608] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" iface="eth0" netns="/var/run/netns/cni-156824af-b4cc-5600-6338-7364373406b5" Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.488 [INFO][4608] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.488 [INFO][4608] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.570 [INFO][4615] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" HandleID="k8s-pod-network.a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Workload="ip--172--31--19--12-k8s-whisker--7895fb4cdc--sjfzr-eth0" Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.570 [INFO][4615] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.570 [INFO][4615] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.584 [WARNING][4615] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" HandleID="k8s-pod-network.a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Workload="ip--172--31--19--12-k8s-whisker--7895fb4cdc--sjfzr-eth0" Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.585 [INFO][4615] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" HandleID="k8s-pod-network.a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Workload="ip--172--31--19--12-k8s-whisker--7895fb4cdc--sjfzr-eth0" Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.587 [INFO][4615] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:13.597104 containerd[1924]: 2026-04-13 19:25:13.593 [INFO][4608] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:13.599583 containerd[1924]: time="2026-04-13T19:25:13.598167550Z" level=info msg="TearDown network for sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\" successfully" Apr 13 19:25:13.599583 containerd[1924]: time="2026-04-13T19:25:13.598264570Z" level=info msg="StopPodSandbox for \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\" returns successfully" Apr 13 19:25:13.604901 systemd[1]: run-netns-cni\x2d156824af\x2db4cc\x2d5600\x2d6338\x2d7364373406b5.mount: Deactivated successfully. 
Apr 13 19:25:13.771980 kubelet[3342]: I0413 19:25:13.771809 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t565x\" (UniqueName: \"kubernetes.io/projected/075d2244-7605-481e-bd76-956a508f7aee-kube-api-access-t565x\") pod \"075d2244-7605-481e-bd76-956a508f7aee\" (UID: \"075d2244-7605-481e-bd76-956a508f7aee\") " Apr 13 19:25:13.771980 kubelet[3342]: I0413 19:25:13.771886 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/075d2244-7605-481e-bd76-956a508f7aee-nginx-config\") pod \"075d2244-7605-481e-bd76-956a508f7aee\" (UID: \"075d2244-7605-481e-bd76-956a508f7aee\") " Apr 13 19:25:13.771980 kubelet[3342]: I0413 19:25:13.771929 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/075d2244-7605-481e-bd76-956a508f7aee-whisker-backend-key-pair\") pod \"075d2244-7605-481e-bd76-956a508f7aee\" (UID: \"075d2244-7605-481e-bd76-956a508f7aee\") " Apr 13 19:25:13.772258 kubelet[3342]: I0413 19:25:13.771997 3342 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/075d2244-7605-481e-bd76-956a508f7aee-whisker-ca-bundle\") pod \"075d2244-7605-481e-bd76-956a508f7aee\" (UID: \"075d2244-7605-481e-bd76-956a508f7aee\") " Apr 13 19:25:13.773472 kubelet[3342]: I0413 19:25:13.772809 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/075d2244-7605-481e-bd76-956a508f7aee-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "075d2244-7605-481e-bd76-956a508f7aee" (UID: "075d2244-7605-481e-bd76-956a508f7aee"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:25:13.773472 kubelet[3342]: I0413 19:25:13.772853 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/075d2244-7605-481e-bd76-956a508f7aee-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "075d2244-7605-481e-bd76-956a508f7aee" (UID: "075d2244-7605-481e-bd76-956a508f7aee"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:25:13.780133 kubelet[3342]: I0413 19:25:13.779594 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/075d2244-7605-481e-bd76-956a508f7aee-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "075d2244-7605-481e-bd76-956a508f7aee" (UID: "075d2244-7605-481e-bd76-956a508f7aee"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:25:13.782131 kubelet[3342]: I0413 19:25:13.782063 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/075d2244-7605-481e-bd76-956a508f7aee-kube-api-access-t565x" (OuterVolumeSpecName: "kube-api-access-t565x") pod "075d2244-7605-481e-bd76-956a508f7aee" (UID: "075d2244-7605-481e-bd76-956a508f7aee"). InnerVolumeSpecName "kube-api-access-t565x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:25:13.786303 systemd[1]: var-lib-kubelet-pods-075d2244\x2d7605\x2d481e\x2dbd76\x2d956a508f7aee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt565x.mount: Deactivated successfully. Apr 13 19:25:13.786592 systemd[1]: var-lib-kubelet-pods-075d2244\x2d7605\x2d481e\x2dbd76\x2d956a508f7aee-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 13 19:25:13.872869 kubelet[3342]: I0413 19:25:13.872810 3342 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t565x\" (UniqueName: \"kubernetes.io/projected/075d2244-7605-481e-bd76-956a508f7aee-kube-api-access-t565x\") on node \"ip-172-31-19-12\" DevicePath \"\"" Apr 13 19:25:13.872869 kubelet[3342]: I0413 19:25:13.872864 3342 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/075d2244-7605-481e-bd76-956a508f7aee-nginx-config\") on node \"ip-172-31-19-12\" DevicePath \"\"" Apr 13 19:25:13.873101 kubelet[3342]: I0413 19:25:13.872888 3342 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/075d2244-7605-481e-bd76-956a508f7aee-whisker-backend-key-pair\") on node \"ip-172-31-19-12\" DevicePath \"\"" Apr 13 19:25:13.873101 kubelet[3342]: I0413 19:25:13.872910 3342 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/075d2244-7605-481e-bd76-956a508f7aee-whisker-ca-bundle\") on node \"ip-172-31-19-12\" DevicePath \"\"" Apr 13 19:25:13.949194 systemd[1]: Removed slice kubepods-besteffort-pod075d2244_7605_481e_bd76_956a508f7aee.slice - libcontainer container kubepods-besteffort-pod075d2244_7605_481e_bd76_956a508f7aee.slice. Apr 13 19:25:14.522869 systemd[1]: Created slice kubepods-besteffort-pod3216bf4d_0c0f_4bbc_8f19_478eea9ec5a0.slice - libcontainer container kubepods-besteffort-pod3216bf4d_0c0f_4bbc_8f19_478eea9ec5a0.slice. 
Apr 13 19:25:14.682963 kubelet[3342]: I0413 19:25:14.682889 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0-whisker-backend-key-pair\") pod \"whisker-869cbc8dd4-sldvw\" (UID: \"3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0\") " pod="calico-system/whisker-869cbc8dd4-sldvw" Apr 13 19:25:14.683552 kubelet[3342]: I0413 19:25:14.682970 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0-nginx-config\") pod \"whisker-869cbc8dd4-sldvw\" (UID: \"3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0\") " pod="calico-system/whisker-869cbc8dd4-sldvw" Apr 13 19:25:14.683552 kubelet[3342]: I0413 19:25:14.683021 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0-whisker-ca-bundle\") pod \"whisker-869cbc8dd4-sldvw\" (UID: \"3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0\") " pod="calico-system/whisker-869cbc8dd4-sldvw" Apr 13 19:25:14.683552 kubelet[3342]: I0413 19:25:14.683079 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvjwv\" (UniqueName: \"kubernetes.io/projected/3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0-kube-api-access-wvjwv\") pod \"whisker-869cbc8dd4-sldvw\" (UID: \"3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0\") " pod="calico-system/whisker-869cbc8dd4-sldvw" Apr 13 19:25:14.847427 containerd[1924]: time="2026-04-13T19:25:14.845461992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-869cbc8dd4-sldvw,Uid:3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:15.256931 systemd-networkd[1842]: cali84e0569d778: Link UP Apr 13 19:25:15.261005 systemd-networkd[1842]: 
cali84e0569d778: Gained carrier Apr 13 19:25:15.269016 (udev-worker)[4745]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:14.985 [ERROR][4717] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.024 [INFO][4717] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0 whisker-869cbc8dd4- calico-system 3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0 958 0 2026-04-13 19:25:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:869cbc8dd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-19-12 whisker-869cbc8dd4-sldvw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali84e0569d778 [] [] }} ContainerID="f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" Namespace="calico-system" Pod="whisker-869cbc8dd4-sldvw" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.025 [INFO][4717] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" Namespace="calico-system" Pod="whisker-869cbc8dd4-sldvw" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.135 [INFO][4732] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" HandleID="k8s-pod-network.f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" 
Workload="ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.166 [INFO][4732] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" HandleID="k8s-pod-network.f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" Workload="ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ea140), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-12", "pod":"whisker-869cbc8dd4-sldvw", "timestamp":"2026-04-13 19:25:15.135495574 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004f3760)} Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.166 [INFO][4732] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.167 [INFO][4732] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.167 [INFO][4732] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.170 [INFO][4732] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" host="ip-172-31-19-12" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.179 [INFO][4732] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-12" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.187 [INFO][4732] ipam/ipam.go 526: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.192 [INFO][4732] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.197 [INFO][4732] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.198 [INFO][4732] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" host="ip-172-31-19-12" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.201 [INFO][4732] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.209 [INFO][4732] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" host="ip-172-31-19-12" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.219 [INFO][4732] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.193/26] block=192.168.64.192/26 
handle="k8s-pod-network.f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" host="ip-172-31-19-12" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.220 [INFO][4732] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.193/26] handle="k8s-pod-network.f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" host="ip-172-31-19-12" Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.220 [INFO][4732] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:15.321643 containerd[1924]: 2026-04-13 19:25:15.220 [INFO][4732] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.193/26] IPv6=[] ContainerID="f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" HandleID="k8s-pod-network.f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" Workload="ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0" Apr 13 19:25:15.323099 containerd[1924]: 2026-04-13 19:25:15.226 [INFO][4717] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" Namespace="calico-system" Pod="whisker-869cbc8dd4-sldvw" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0", GenerateName:"whisker-869cbc8dd4-", Namespace:"calico-system", SelfLink:"", UID:"3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"869cbc8dd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"whisker-869cbc8dd4-sldvw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali84e0569d778", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:15.323099 containerd[1924]: 2026-04-13 19:25:15.227 [INFO][4717] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.193/32] ContainerID="f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" Namespace="calico-system" Pod="whisker-869cbc8dd4-sldvw" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0" Apr 13 19:25:15.323099 containerd[1924]: 2026-04-13 19:25:15.227 [INFO][4717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84e0569d778 ContainerID="f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" Namespace="calico-system" Pod="whisker-869cbc8dd4-sldvw" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0" Apr 13 19:25:15.323099 containerd[1924]: 2026-04-13 19:25:15.265 [INFO][4717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" Namespace="calico-system" Pod="whisker-869cbc8dd4-sldvw" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0" Apr 13 19:25:15.323099 containerd[1924]: 2026-04-13 19:25:15.267 [INFO][4717] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" 
Namespace="calico-system" Pod="whisker-869cbc8dd4-sldvw" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0", GenerateName:"whisker-869cbc8dd4-", Namespace:"calico-system", SelfLink:"", UID:"3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"869cbc8dd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f", Pod:"whisker-869cbc8dd4-sldvw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali84e0569d778", MAC:"36:19:c5:a6:64:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:15.323099 containerd[1924]: 2026-04-13 19:25:15.312 [INFO][4717] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f" Namespace="calico-system" Pod="whisker-869cbc8dd4-sldvw" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--869cbc8dd4--sldvw-eth0" Apr 13 19:25:15.383843 
containerd[1924]: time="2026-04-13T19:25:15.376321535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:15.383843 containerd[1924]: time="2026-04-13T19:25:15.376407635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:15.383843 containerd[1924]: time="2026-04-13T19:25:15.376483499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.383843 containerd[1924]: time="2026-04-13T19:25:15.376648499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.470068 systemd[1]: Started cri-containerd-f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f.scope - libcontainer container f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f. 
Apr 13 19:25:15.560312 containerd[1924]: time="2026-04-13T19:25:15.560242380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-869cbc8dd4-sldvw,Uid:3216bf4d-0c0f-4bbc-8f19-478eea9ec5a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f\"" Apr 13 19:25:15.564684 containerd[1924]: time="2026-04-13T19:25:15.564345888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 19:25:15.938949 kubelet[3342]: I0413 19:25:15.938895 3342 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="075d2244-7605-481e-bd76-956a508f7aee" path="/var/lib/kubelet/pods/075d2244-7605-481e-bd76-956a508f7aee/volumes" Apr 13 19:25:16.930089 systemd-networkd[1842]: cali84e0569d778: Gained IPv6LL Apr 13 19:25:17.521199 kubelet[3342]: I0413 19:25:17.521135 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:25:18.836782 kernel: calico-node[4863]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 19:25:19.561087 (udev-worker)[4892]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:25:19.581977 systemd-networkd[1842]: vxlan.calico: Link UP Apr 13 19:25:19.581998 systemd-networkd[1842]: vxlan.calico: Gained carrier Apr 13 19:25:19.642882 (udev-worker)[4897]: Network interface NamePolicy= disabled on kernel command line. 
Apr 13 19:25:20.532286 containerd[1924]: time="2026-04-13T19:25:20.532203436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:20.535303 containerd[1924]: time="2026-04-13T19:25:20.534963688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 13 19:25:20.537423 containerd[1924]: time="2026-04-13T19:25:20.537350056Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:20.544144 containerd[1924]: time="2026-04-13T19:25:20.543900892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:20.547130 containerd[1924]: time="2026-04-13T19:25:20.546986632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 4.982561964s" Apr 13 19:25:20.547130 containerd[1924]: time="2026-04-13T19:25:20.547073104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 13 19:25:20.558215 containerd[1924]: time="2026-04-13T19:25:20.557898797Z" level=info msg="CreateContainer within sandbox \"f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 19:25:20.596756 containerd[1924]: time="2026-04-13T19:25:20.596676557Z" level=info 
msg="CreateContainer within sandbox \"f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"73f0f635b12dd487c0f90ea41704376b72f7ddd05f6e90495ad9ce167d855aac\"" Apr 13 19:25:20.598799 containerd[1924]: time="2026-04-13T19:25:20.598161833Z" level=info msg="StartContainer for \"73f0f635b12dd487c0f90ea41704376b72f7ddd05f6e90495ad9ce167d855aac\"" Apr 13 19:25:20.662895 systemd[1]: Started cri-containerd-73f0f635b12dd487c0f90ea41704376b72f7ddd05f6e90495ad9ce167d855aac.scope - libcontainer container 73f0f635b12dd487c0f90ea41704376b72f7ddd05f6e90495ad9ce167d855aac. Apr 13 19:25:20.732265 containerd[1924]: time="2026-04-13T19:25:20.732211433Z" level=info msg="StartContainer for \"73f0f635b12dd487c0f90ea41704376b72f7ddd05f6e90495ad9ce167d855aac\" returns successfully" Apr 13 19:25:20.738957 containerd[1924]: time="2026-04-13T19:25:20.738413885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 19:25:21.282068 systemd-networkd[1842]: vxlan.calico: Gained IPv6LL Apr 13 19:25:22.934934 containerd[1924]: time="2026-04-13T19:25:22.934881176Z" level=info msg="StopPodSandbox for \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\"" Apr 13 19:25:22.968433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017867621.mount: Deactivated successfully. 
Apr 13 19:25:23.019641 containerd[1924]: time="2026-04-13T19:25:23.019571501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:23.022565 containerd[1924]: time="2026-04-13T19:25:23.022468025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 13 19:25:23.025844 containerd[1924]: time="2026-04-13T19:25:23.025772789Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:23.031849 containerd[1924]: time="2026-04-13T19:25:23.031787465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:23.035916 containerd[1924]: time="2026-04-13T19:25:23.035617493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.297134824s" Apr 13 19:25:23.035916 containerd[1924]: time="2026-04-13T19:25:23.035687321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 13 19:25:23.050653 containerd[1924]: time="2026-04-13T19:25:23.050581205Z" level=info msg="CreateContainer within sandbox \"f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 19:25:23.090201 
containerd[1924]: time="2026-04-13T19:25:23.088910225Z" level=info msg="CreateContainer within sandbox \"f9ced8d194dcc9bdaf763f282a0a09abb89b5b96dc4f69d8430e23b7633fae2f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"71f528b27d5e1ecf60e1ce32fd63b0abc4c4620ef02effb6ace3632acded04b7\"" Apr 13 19:25:23.092986 containerd[1924]: time="2026-04-13T19:25:23.092319197Z" level=info msg="StartContainer for \"71f528b27d5e1ecf60e1ce32fd63b0abc4c4620ef02effb6ace3632acded04b7\"" Apr 13 19:25:23.178050 systemd[1]: Started cri-containerd-71f528b27d5e1ecf60e1ce32fd63b0abc4c4620ef02effb6ace3632acded04b7.scope - libcontainer container 71f528b27d5e1ecf60e1ce32fd63b0abc4c4620ef02effb6ace3632acded04b7. Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.068 [INFO][5016] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.069 [INFO][5016] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" iface="eth0" netns="/var/run/netns/cni-a5b7f8a9-aaf6-c765-13ce-73dbe55ade41" Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.073 [INFO][5016] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" iface="eth0" netns="/var/run/netns/cni-a5b7f8a9-aaf6-c765-13ce-73dbe55ade41" Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.075 [INFO][5016] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" iface="eth0" netns="/var/run/netns/cni-a5b7f8a9-aaf6-c765-13ce-73dbe55ade41" Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.076 [INFO][5016] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.076 [INFO][5016] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.153 [INFO][5028] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" HandleID="k8s-pod-network.070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.154 [INFO][5028] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.154 [INFO][5028] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.179 [WARNING][5028] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" HandleID="k8s-pod-network.070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.179 [INFO][5028] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" HandleID="k8s-pod-network.070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.183 [INFO][5028] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.215396 containerd[1924]: 2026-04-13 19:25:23.195 [INFO][5016] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:23.217759 containerd[1924]: time="2026-04-13T19:25:23.217290270Z" level=info msg="TearDown network for sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\" successfully" Apr 13 19:25:23.217759 containerd[1924]: time="2026-04-13T19:25:23.217483674Z" level=info msg="StopPodSandbox for \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\" returns successfully" Apr 13 19:25:23.229311 containerd[1924]: time="2026-04-13T19:25:23.229105182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8wg88,Uid:4abfcee3-399e-4a75-8885-c3e8ec391b58,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:23.282117 containerd[1924]: time="2026-04-13T19:25:23.282046746Z" level=info msg="StartContainer for \"71f528b27d5e1ecf60e1ce32fd63b0abc4c4620ef02effb6ace3632acded04b7\" returns successfully" Apr 13 19:25:23.506052 systemd-networkd[1842]: calid82583c3372: Link UP Apr 13 19:25:23.506499 systemd-networkd[1842]: calid82583c3372: Gained carrier Apr 13 19:25:23.516886 
(udev-worker)[5091]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:25:23.531577 kubelet[3342]: I0413 19:25:23.529434 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-869cbc8dd4-sldvw" podStartSLOduration=2.053902918 podStartE2EDuration="9.529407403s" podCreationTimestamp="2026-04-13 19:25:14 +0000 UTC" firstStartedPulling="2026-04-13 19:25:15.562828632 +0000 UTC m=+51.899534203" lastFinishedPulling="2026-04-13 19:25:23.038333129 +0000 UTC m=+59.375038688" observedRunningTime="2026-04-13 19:25:23.455140651 +0000 UTC m=+59.791846246" watchObservedRunningTime="2026-04-13 19:25:23.529407403 +0000 UTC m=+59.866113010" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.346 [INFO][5059] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0 goldmane-cccfbd5cf- calico-system 4abfcee3-399e-4a75-8885-c3e8ec391b58 993 0 2026-04-13 19:24:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-19-12 goldmane-cccfbd5cf-8wg88 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid82583c3372 [] [] }} ContainerID="4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8wg88" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.347 [INFO][5059] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8wg88" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:23.543962 containerd[1924]: 
2026-04-13 19:25:23.389 [INFO][5083] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" HandleID="k8s-pod-network.4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.407 [INFO][5083] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" HandleID="k8s-pod-network.4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-12", "pod":"goldmane-cccfbd5cf-8wg88", "timestamp":"2026-04-13 19:25:23.389681023 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b3080)} Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.407 [INFO][5083] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.408 [INFO][5083] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.408 [INFO][5083] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.416 [INFO][5083] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" host="ip-172-31-19-12" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.440 [INFO][5083] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-12" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.450 [INFO][5083] ipam/ipam.go 526: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.459 [INFO][5083] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.468 [INFO][5083] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.470 [INFO][5083] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" host="ip-172-31-19-12" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.473 [INFO][5083] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24 Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.480 [INFO][5083] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" host="ip-172-31-19-12" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.492 [INFO][5083] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.194/26] block=192.168.64.192/26 
handle="k8s-pod-network.4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" host="ip-172-31-19-12" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.493 [INFO][5083] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.194/26] handle="k8s-pod-network.4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" host="ip-172-31-19-12" Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.493 [INFO][5083] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.543962 containerd[1924]: 2026-04-13 19:25:23.493 [INFO][5083] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.194/26] IPv6=[] ContainerID="4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" HandleID="k8s-pod-network.4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:23.547024 containerd[1924]: 2026-04-13 19:25:23.499 [INFO][5059] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8wg88" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"4abfcee3-399e-4a75-8885-c3e8ec391b58", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"goldmane-cccfbd5cf-8wg88", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid82583c3372", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:23.547024 containerd[1924]: 2026-04-13 19:25:23.499 [INFO][5059] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.194/32] ContainerID="4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8wg88" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:23.547024 containerd[1924]: 2026-04-13 19:25:23.499 [INFO][5059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid82583c3372 ContainerID="4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8wg88" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:23.547024 containerd[1924]: 2026-04-13 19:25:23.503 [INFO][5059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8wg88" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:23.547024 containerd[1924]: 2026-04-13 19:25:23.504 [INFO][5059] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" 
Namespace="calico-system" Pod="goldmane-cccfbd5cf-8wg88" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"4abfcee3-399e-4a75-8885-c3e8ec391b58", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24", Pod:"goldmane-cccfbd5cf-8wg88", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid82583c3372", MAC:"56:86:12:d4:e1:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:23.547024 containerd[1924]: 2026-04-13 19:25:23.530 [INFO][5059] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8wg88" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:23.593555 
containerd[1924]: time="2026-04-13T19:25:23.593243888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:23.593555 containerd[1924]: time="2026-04-13T19:25:23.593356568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:23.593555 containerd[1924]: time="2026-04-13T19:25:23.593427380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:23.595079 containerd[1924]: time="2026-04-13T19:25:23.593633216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:23.609624 systemd[1]: run-netns-cni\x2da5b7f8a9\x2daaf6\x2dc765\x2d13ce\x2d73dbe55ade41.mount: Deactivated successfully. Apr 13 19:25:23.672150 systemd[1]: Started cri-containerd-4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24.scope - libcontainer container 4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24. 
Apr 13 19:25:23.754289 containerd[1924]: time="2026-04-13T19:25:23.753906416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8wg88,Uid:4abfcee3-399e-4a75-8885-c3e8ec391b58,Namespace:calico-system,Attempt:1,} returns sandbox id \"4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24\"" Apr 13 19:25:23.759272 containerd[1924]: time="2026-04-13T19:25:23.759103940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 19:25:23.937015 containerd[1924]: time="2026-04-13T19:25:23.936377013Z" level=info msg="StopPodSandbox for \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\"" Apr 13 19:25:23.940201 containerd[1924]: time="2026-04-13T19:25:23.939440169Z" level=info msg="StopPodSandbox for \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\"" Apr 13 19:25:23.941803 containerd[1924]: time="2026-04-13T19:25:23.940986717Z" level=info msg="StopPodSandbox for \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\"" Apr 13 19:25:23.976982 containerd[1924]: time="2026-04-13T19:25:23.976911309Z" level=info msg="StopPodSandbox for \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\"" Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.099 [INFO][5175] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.105 [INFO][5175] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" iface="eth0" netns="/var/run/netns/cni-28bf7d8f-49d8-e262-204f-3f832af15afb" Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.106 [INFO][5175] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" iface="eth0" netns="/var/run/netns/cni-28bf7d8f-49d8-e262-204f-3f832af15afb" Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.107 [INFO][5175] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" iface="eth0" netns="/var/run/netns/cni-28bf7d8f-49d8-e262-204f-3f832af15afb" Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.107 [INFO][5175] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.107 [INFO][5175] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.255 [INFO][5208] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" HandleID="k8s-pod-network.e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.255 [INFO][5208] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.256 [INFO][5208] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.298 [WARNING][5208] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" HandleID="k8s-pod-network.e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.298 [INFO][5208] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" HandleID="k8s-pod-network.e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.302 [INFO][5208] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:24.316241 containerd[1924]: 2026-04-13 19:25:24.309 [INFO][5175] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:25:24.318778 containerd[1924]: time="2026-04-13T19:25:24.317658127Z" level=info msg="TearDown network for sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\" successfully" Apr 13 19:25:24.318778 containerd[1924]: time="2026-04-13T19:25:24.317719759Z" level=info msg="StopPodSandbox for \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\" returns successfully" Apr 13 19:25:24.325038 containerd[1924]: time="2026-04-13T19:25:24.324969895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77999c4d5b-4q7t4,Uid:ff8ff070-0d4d-4815-8703-aa78cce64b54,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:24.326530 systemd[1]: run-netns-cni\x2d28bf7d8f\x2d49d8\x2de262\x2d204f\x2d3f832af15afb.mount: Deactivated successfully. 
Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.238 [INFO][5176] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.240 [INFO][5176] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" iface="eth0" netns="/var/run/netns/cni-47e9c2d5-384e-3178-e2d0-1c9542980547" Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.244 [INFO][5176] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" iface="eth0" netns="/var/run/netns/cni-47e9c2d5-384e-3178-e2d0-1c9542980547" Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.244 [INFO][5176] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" iface="eth0" netns="/var/run/netns/cni-47e9c2d5-384e-3178-e2d0-1c9542980547" Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.245 [INFO][5176] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.245 [INFO][5176] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.368 [INFO][5219] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" HandleID="k8s-pod-network.a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.369 [INFO][5219] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.369 [INFO][5219] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.407 [WARNING][5219] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" HandleID="k8s-pod-network.a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.408 [INFO][5219] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" HandleID="k8s-pod-network.a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.417 [INFO][5219] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:24.443798 containerd[1924]: 2026-04-13 19:25:24.432 [INFO][5176] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Apr 13 19:25:24.443798 containerd[1924]: time="2026-04-13T19:25:24.442580444Z" level=info msg="TearDown network for sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\" successfully" Apr 13 19:25:24.443798 containerd[1924]: time="2026-04-13T19:25:24.442617824Z" level=info msg="StopPodSandbox for \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\" returns successfully" Apr 13 19:25:24.447958 containerd[1924]: time="2026-04-13T19:25:24.447873944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2478b,Uid:d45a62d9-5e0c-4809-865b-4362b930842e,Namespace:kube-system,Attempt:1,}" Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.253 [WARNING][5200] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"4abfcee3-399e-4a75-8885-c3e8ec391b58", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", 
ContainerID:"4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24", Pod:"goldmane-cccfbd5cf-8wg88", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid82583c3372", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.253 [INFO][5200] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.253 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" iface="eth0" netns="" Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.253 [INFO][5200] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.253 [INFO][5200] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.408 [INFO][5225] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" HandleID="k8s-pod-network.070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.409 [INFO][5225] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.417 [INFO][5225] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.459 [WARNING][5225] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" HandleID="k8s-pod-network.070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.460 [INFO][5225] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" HandleID="k8s-pod-network.070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.467 [INFO][5225] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:24.486912 containerd[1924]: 2026-04-13 19:25:24.476 [INFO][5200] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:24.489197 containerd[1924]: time="2026-04-13T19:25:24.486948056Z" level=info msg="TearDown network for sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\" successfully" Apr 13 19:25:24.489197 containerd[1924]: time="2026-04-13T19:25:24.487018796Z" level=info msg="StopPodSandbox for \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\" returns successfully" Apr 13 19:25:24.489197 containerd[1924]: time="2026-04-13T19:25:24.488033588Z" level=info msg="RemovePodSandbox for \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\"" Apr 13 19:25:24.489197 containerd[1924]: time="2026-04-13T19:25:24.488470088Z" level=info msg="Forcibly stopping sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\"" Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.291 [INFO][5185] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.292 [INFO][5185] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" iface="eth0" netns="/var/run/netns/cni-d90172a9-b051-85db-733b-22e39d68f47e" Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.292 [INFO][5185] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" iface="eth0" netns="/var/run/netns/cni-d90172a9-b051-85db-733b-22e39d68f47e" Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.293 [INFO][5185] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" iface="eth0" netns="/var/run/netns/cni-d90172a9-b051-85db-733b-22e39d68f47e" Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.293 [INFO][5185] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.293 [INFO][5185] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.451 [INFO][5230] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" HandleID="k8s-pod-network.9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.452 [INFO][5230] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.467 [INFO][5230] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.496 [WARNING][5230] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" HandleID="k8s-pod-network.9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.498 [INFO][5230] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" HandleID="k8s-pod-network.9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.510 [INFO][5230] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:24.562307 containerd[1924]: 2026-04-13 19:25:24.543 [INFO][5185] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:25:24.564110 containerd[1924]: time="2026-04-13T19:25:24.562871468Z" level=info msg="TearDown network for sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\" successfully" Apr 13 19:25:24.564232 containerd[1924]: time="2026-04-13T19:25:24.564167600Z" level=info msg="StopPodSandbox for \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\" returns successfully" Apr 13 19:25:24.573034 containerd[1924]: time="2026-04-13T19:25:24.572887268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6444d8f97b-b7bhq,Uid:fced93c3-4c99-4651-a35d-57e6eb8bc151,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:24.615303 systemd[1]: run-netns-cni\x2d47e9c2d5\x2d384e\x2d3178\x2de2d0\x2d1c9542980547.mount: Deactivated successfully. Apr 13 19:25:24.615478 systemd[1]: run-netns-cni\x2dd90172a9\x2db051\x2d85db\x2d733b\x2d22e39d68f47e.mount: Deactivated successfully. 
Apr 13 19:25:24.937460 containerd[1924]: time="2026-04-13T19:25:24.937402354Z" level=info msg="StopPodSandbox for \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\"" Apr 13 19:25:25.003795 systemd-networkd[1842]: calid82583c3372: Gained IPv6LL Apr 13 19:25:25.036204 systemd-networkd[1842]: cali285457d5f77: Link UP Apr 13 19:25:25.037256 systemd-networkd[1842]: cali285457d5f77: Gained carrier Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:24.848 [WARNING][5269] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"4abfcee3-399e-4a75-8885-c3e8ec391b58", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24", Pod:"goldmane-cccfbd5cf-8wg88", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calid82583c3372", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:24.849 [INFO][5269] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:24.849 [INFO][5269] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" iface="eth0" netns="" Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:24.849 [INFO][5269] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:24.849 [INFO][5269] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:24.994 [INFO][5299] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" HandleID="k8s-pod-network.070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:24.996 [INFO][5299] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:24.996 [INFO][5299] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:25.036 [WARNING][5299] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" HandleID="k8s-pod-network.070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:25.036 [INFO][5299] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" HandleID="k8s-pod-network.070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Workload="ip--172--31--19--12-k8s-goldmane--cccfbd5cf--8wg88-eth0" Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:25.046 [INFO][5299] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:25.081481 containerd[1924]: 2026-04-13 19:25:25.068 [INFO][5269] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead" Apr 13 19:25:25.084776 containerd[1924]: time="2026-04-13T19:25:25.084262771Z" level=info msg="TearDown network for sandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\" successfully" Apr 13 19:25:25.115157 containerd[1924]: time="2026-04-13T19:25:25.115069375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:25:25.115854 containerd[1924]: time="2026-04-13T19:25:25.115808959Z" level=info msg="RemovePodSandbox \"070fc12f816419f58c8e35ad24f7908f6f5167c86029213dcc253b732bb8aead\" returns successfully" Apr 13 19:25:25.119819 containerd[1924]: time="2026-04-13T19:25:25.119647723Z" level=info msg="StopPodSandbox for \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\"" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.642 [INFO][5240] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0 calico-apiserver-77999c4d5b- calico-system ff8ff070-0d4d-4815-8703-aa78cce64b54 1008 0 2026-04-13 19:24:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77999c4d5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-12 calico-apiserver-77999c4d5b-4q7t4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali285457d5f77 [] [] }} ContainerID="f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4q7t4" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.663 [INFO][5240] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4q7t4" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.810 [INFO][5280] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" 
HandleID="k8s-pod-network.f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.857 [INFO][5280] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" HandleID="k8s-pod-network.f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb370), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-12", "pod":"calico-apiserver-77999c4d5b-4q7t4", "timestamp":"2026-04-13 19:25:24.81086857 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000228580)} Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.857 [INFO][5280] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.857 [INFO][5280] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.857 [INFO][5280] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.866 [INFO][5280] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" host="ip-172-31-19-12" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.882 [INFO][5280] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-12" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.895 [INFO][5280] ipam/ipam.go 526: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.904 [INFO][5280] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.925 [INFO][5280] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.925 [INFO][5280] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" host="ip-172-31-19-12" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.932 [INFO][5280] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6 Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.953 [INFO][5280] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" host="ip-172-31-19-12" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.980 [INFO][5280] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.195/26] block=192.168.64.192/26 
handle="k8s-pod-network.f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" host="ip-172-31-19-12" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.981 [INFO][5280] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.195/26] handle="k8s-pod-network.f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" host="ip-172-31-19-12" Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.981 [INFO][5280] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:25.130317 containerd[1924]: 2026-04-13 19:25:24.981 [INFO][5280] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.195/26] IPv6=[] ContainerID="f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" HandleID="k8s-pod-network.f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:25:25.132631 containerd[1924]: 2026-04-13 19:25:24.991 [INFO][5240] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4q7t4" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0", GenerateName:"calico-apiserver-77999c4d5b-", Namespace:"calico-system", SelfLink:"", UID:"ff8ff070-0d4d-4815-8703-aa78cce64b54", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77999c4d5b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"calico-apiserver-77999c4d5b-4q7t4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali285457d5f77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:25.132631 containerd[1924]: 2026-04-13 19:25:24.991 [INFO][5240] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.195/32] ContainerID="f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4q7t4" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:25:25.132631 containerd[1924]: 2026-04-13 19:25:24.991 [INFO][5240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali285457d5f77 ContainerID="f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4q7t4" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:25:25.132631 containerd[1924]: 2026-04-13 19:25:25.046 [INFO][5240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4q7t4" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:25:25.132631 containerd[1924]: 2026-04-13 19:25:25.052 [INFO][5240] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4q7t4" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0", GenerateName:"calico-apiserver-77999c4d5b-", Namespace:"calico-system", SelfLink:"", UID:"ff8ff070-0d4d-4815-8703-aa78cce64b54", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77999c4d5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6", Pod:"calico-apiserver-77999c4d5b-4q7t4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali285457d5f77", MAC:"7e:54:93:de:e8:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:25.132631 containerd[1924]: 2026-04-13 19:25:25.096 [INFO][5240] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4q7t4" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:25:25.339823 systemd-networkd[1842]: calie210cc9f1ac: Link UP Apr 13 19:25:25.351081 systemd-networkd[1842]: calie210cc9f1ac: Gained carrier Apr 13 19:25:25.395825 containerd[1924]: time="2026-04-13T19:25:25.391579545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:25.395825 containerd[1924]: time="2026-04-13T19:25:25.392588637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:25.395825 containerd[1924]: time="2026-04-13T19:25:25.393080349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:25.395825 containerd[1924]: time="2026-04-13T19:25:25.394701705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:24.820 [INFO][5256] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0 coredns-66bc5c9577- kube-system d45a62d9-5e0c-4809-865b-4362b930842e 1010 0 2026-04-13 19:24:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-12 coredns-66bc5c9577-2478b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie210cc9f1ac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" Namespace="kube-system" Pod="coredns-66bc5c9577-2478b" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:24.821 [INFO][5256] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" Namespace="kube-system" Pod="coredns-66bc5c9577-2478b" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.061 [INFO][5301] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" HandleID="k8s-pod-network.9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.101 [INFO][5301] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" 
HandleID="k8s-pod-network.9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037d450), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-12", "pod":"coredns-66bc5c9577-2478b", "timestamp":"2026-04-13 19:25:25.061411447 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186580)} Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.101 [INFO][5301] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.101 [INFO][5301] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.102 [INFO][5301] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.115 [INFO][5301] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" host="ip-172-31-19-12" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.133 [INFO][5301] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-12" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.153 [INFO][5301] ipam/ipam.go 526: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.164 [INFO][5301] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.185 [INFO][5301] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.185 [INFO][5301] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" host="ip-172-31-19-12" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.191 [INFO][5301] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3 Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.208 [INFO][5301] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" host="ip-172-31-19-12" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.237 [INFO][5301] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.196/26] block=192.168.64.192/26 handle="k8s-pod-network.9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" host="ip-172-31-19-12" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.237 [INFO][5301] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.196/26] handle="k8s-pod-network.9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" host="ip-172-31-19-12" Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.239 [INFO][5301] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:25:25.480379 containerd[1924]: 2026-04-13 19:25:25.239 [INFO][5301] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.196/26] IPv6=[] ContainerID="9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" HandleID="k8s-pod-network.9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" Apr 13 19:25:25.484223 containerd[1924]: 2026-04-13 19:25:25.273 [INFO][5256] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" Namespace="kube-system" Pod="coredns-66bc5c9577-2478b" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d45a62d9-5e0c-4809-865b-4362b930842e", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"coredns-66bc5c9577-2478b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie210cc9f1ac", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:25.484223 containerd[1924]: 2026-04-13 19:25:25.278 [INFO][5256] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.196/32] ContainerID="9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" Namespace="kube-system" Pod="coredns-66bc5c9577-2478b" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" Apr 13 19:25:25.484223 containerd[1924]: 2026-04-13 19:25:25.282 [INFO][5256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie210cc9f1ac ContainerID="9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" Namespace="kube-system" Pod="coredns-66bc5c9577-2478b" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" Apr 13 19:25:25.484223 containerd[1924]: 2026-04-13 19:25:25.352 [INFO][5256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" Namespace="kube-system" Pod="coredns-66bc5c9577-2478b" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" Apr 13 19:25:25.484223 containerd[1924]: 2026-04-13 19:25:25.357 [INFO][5256] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" Namespace="kube-system" Pod="coredns-66bc5c9577-2478b" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d45a62d9-5e0c-4809-865b-4362b930842e", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3", Pod:"coredns-66bc5c9577-2478b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie210cc9f1ac", MAC:"8a:c2:ff:59:c4:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:25.484223 containerd[1924]: 2026-04-13 19:25:25.455 [INFO][5256] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3" Namespace="kube-system" Pod="coredns-66bc5c9577-2478b" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" Apr 13 19:25:25.569365 systemd[1]: Started cri-containerd-f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6.scope - libcontainer container f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6. Apr 13 19:25:25.639158 systemd-networkd[1842]: cali45490a0a328: Link UP Apr 13 19:25:25.681911 systemd-networkd[1842]: cali45490a0a328: Gained carrier Apr 13 19:25:25.810139 containerd[1924]: time="2026-04-13T19:25:25.807009935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:25.810139 containerd[1924]: time="2026-04-13T19:25:25.807108419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:25.810139 containerd[1924]: time="2026-04-13T19:25:25.807134327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:25.810139 containerd[1924]: time="2026-04-13T19:25:25.807288059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:24.867 [INFO][5274] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0 calico-kube-controllers-6444d8f97b- calico-system fced93c3-4c99-4651-a35d-57e6eb8bc151 1011 0 2026-04-13 19:24:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6444d8f97b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-12 calico-kube-controllers-6444d8f97b-b7bhq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali45490a0a328 [] [] }} ContainerID="065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" Namespace="calico-system" Pod="calico-kube-controllers-6444d8f97b-b7bhq" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:24.868 [INFO][5274] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" Namespace="calico-system" Pod="calico-kube-controllers-6444d8f97b-b7bhq" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.144 [INFO][5309] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" HandleID="k8s-pod-network.065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.194 [INFO][5309] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" HandleID="k8s-pod-network.065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8150), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-12", "pod":"calico-kube-controllers-6444d8f97b-b7bhq", "timestamp":"2026-04-13 19:25:25.144527863 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b2000)} Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.194 [INFO][5309] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.239 [INFO][5309] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.241 [INFO][5309] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.263 [INFO][5309] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" host="ip-172-31-19-12" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.282 [INFO][5309] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-12" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.334 [INFO][5309] ipam/ipam.go 526: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.345 [INFO][5309] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.362 [INFO][5309] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.362 [INFO][5309] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" host="ip-172-31-19-12" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.452 [INFO][5309] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994 Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.488 [INFO][5309] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" host="ip-172-31-19-12" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.528 [INFO][5309] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.197/26] block=192.168.64.192/26 
handle="k8s-pod-network.065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" host="ip-172-31-19-12" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.528 [INFO][5309] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.197/26] handle="k8s-pod-network.065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" host="ip-172-31-19-12" Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.528 [INFO][5309] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:25.816986 containerd[1924]: 2026-04-13 19:25:25.528 [INFO][5309] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.197/26] IPv6=[] ContainerID="065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" HandleID="k8s-pod-network.065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:25:25.820282 containerd[1924]: 2026-04-13 19:25:25.590 [INFO][5274] cni-plugin/k8s.go 418: Populated endpoint ContainerID="065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" Namespace="calico-system" Pod="calico-kube-controllers-6444d8f97b-b7bhq" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0", GenerateName:"calico-kube-controllers-6444d8f97b-", Namespace:"calico-system", SelfLink:"", UID:"fced93c3-4c99-4651-a35d-57e6eb8bc151", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6444d8f97b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"calico-kube-controllers-6444d8f97b-b7bhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45490a0a328", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:25.820282 containerd[1924]: 2026-04-13 19:25:25.595 [INFO][5274] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.197/32] ContainerID="065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" Namespace="calico-system" Pod="calico-kube-controllers-6444d8f97b-b7bhq" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:25:25.820282 containerd[1924]: 2026-04-13 19:25:25.595 [INFO][5274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45490a0a328 ContainerID="065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" Namespace="calico-system" Pod="calico-kube-controllers-6444d8f97b-b7bhq" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:25:25.820282 containerd[1924]: 2026-04-13 19:25:25.720 [INFO][5274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" Namespace="calico-system" Pod="calico-kube-controllers-6444d8f97b-b7bhq" 
WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:25:25.820282 containerd[1924]: 2026-04-13 19:25:25.721 [INFO][5274] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" Namespace="calico-system" Pod="calico-kube-controllers-6444d8f97b-b7bhq" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0", GenerateName:"calico-kube-controllers-6444d8f97b-", Namespace:"calico-system", SelfLink:"", UID:"fced93c3-4c99-4651-a35d-57e6eb8bc151", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6444d8f97b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994", Pod:"calico-kube-controllers-6444d8f97b-b7bhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45490a0a328", MAC:"82:d8:a8:db:21:d7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:25.820282 containerd[1924]: 2026-04-13 19:25:25.795 [INFO][5274] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994" Namespace="calico-system" Pod="calico-kube-controllers-6444d8f97b-b7bhq" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.327 [INFO][5323] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.334 [INFO][5323] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" iface="eth0" netns="/var/run/netns/cni-c82fbfe1-74e4-981a-852e-0818508a3181" Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.336 [INFO][5323] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" iface="eth0" netns="/var/run/netns/cni-c82fbfe1-74e4-981a-852e-0818508a3181" Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.339 [INFO][5323] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" iface="eth0" netns="/var/run/netns/cni-c82fbfe1-74e4-981a-852e-0818508a3181" Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.341 [INFO][5323] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.344 [INFO][5323] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.782 [INFO][5383] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" HandleID="k8s-pod-network.cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.787 [INFO][5383] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.794 [INFO][5383] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.948 [WARNING][5383] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" HandleID="k8s-pod-network.cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.948 [INFO][5383] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" HandleID="k8s-pod-network.cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.957 [INFO][5383] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:26.013786 containerd[1924]: 2026-04-13 19:25:25.995 [INFO][5323] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:25:26.027546 systemd[1]: run-netns-cni\x2dc82fbfe1\x2d74e4\x2d981a\x2d852e\x2d0818508a3181.mount: Deactivated successfully. 
Apr 13 19:25:26.045793 containerd[1924]: time="2026-04-13T19:25:26.045311384Z" level=info msg="TearDown network for sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\" successfully" Apr 13 19:25:26.045793 containerd[1924]: time="2026-04-13T19:25:26.045374396Z" level=info msg="StopPodSandbox for \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\" returns successfully" Apr 13 19:25:26.058269 containerd[1924]: time="2026-04-13T19:25:26.057627488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77999c4d5b-4qscg,Uid:222d069d-fa76-4e04-b779-eb3d366b1a95,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:26.083588 systemd[1]: Started cri-containerd-9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3.scope - libcontainer container 9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3. Apr 13 19:25:26.093446 systemd[1]: Started sshd@7-172.31.19.12:22-4.175.71.9:34184.service - OpenSSH per-connection server daemon (4.175.71.9:34184). Apr 13 19:25:26.205827 containerd[1924]: time="2026-04-13T19:25:26.203134773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:26.205827 containerd[1924]: time="2026-04-13T19:25:26.203289909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:26.205827 containerd[1924]: time="2026-04-13T19:25:26.203329161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:26.205827 containerd[1924]: time="2026-04-13T19:25:26.203509161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:26.364666 systemd[1]: Started cri-containerd-065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994.scope - libcontainer container 065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994. Apr 13 19:25:26.419252 containerd[1924]: time="2026-04-13T19:25:26.419188222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2478b,Uid:d45a62d9-5e0c-4809-865b-4362b930842e,Namespace:kube-system,Attempt:1,} returns sandbox id \"9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3\"" Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:25.920 [WARNING][5354] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--7895fb4cdc--sjfzr-eth0" Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:25.921 [INFO][5354] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:25.921 [INFO][5354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" iface="eth0" netns="" Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:25.921 [INFO][5354] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:25.921 [INFO][5354] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:26.302 [INFO][5446] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" HandleID="k8s-pod-network.a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Workload="ip--172--31--19--12-k8s-whisker--7895fb4cdc--sjfzr-eth0" Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:26.310 [INFO][5446] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:26.310 [INFO][5446] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:26.372 [WARNING][5446] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" HandleID="k8s-pod-network.a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Workload="ip--172--31--19--12-k8s-whisker--7895fb4cdc--sjfzr-eth0" Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:26.372 [INFO][5446] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" HandleID="k8s-pod-network.a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Workload="ip--172--31--19--12-k8s-whisker--7895fb4cdc--sjfzr-eth0" Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:26.380 [INFO][5446] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:26.422745 containerd[1924]: 2026-04-13 19:25:26.403 [INFO][5354] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:26.425047 containerd[1924]: time="2026-04-13T19:25:26.423778006Z" level=info msg="TearDown network for sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\" successfully" Apr 13 19:25:26.425047 containerd[1924]: time="2026-04-13T19:25:26.423857758Z" level=info msg="StopPodSandbox for \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\" returns successfully" Apr 13 19:25:26.434504 containerd[1924]: time="2026-04-13T19:25:26.432212314Z" level=info msg="RemovePodSandbox for \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\"" Apr 13 19:25:26.437358 containerd[1924]: time="2026-04-13T19:25:26.436890286Z" level=info msg="Forcibly stopping sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\"" Apr 13 19:25:26.459775 containerd[1924]: time="2026-04-13T19:25:26.459079426Z" level=info msg="CreateContainer within sandbox \"9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:25:26.624699 containerd[1924]: time="2026-04-13T19:25:26.624530663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77999c4d5b-4q7t4,Uid:ff8ff070-0d4d-4815-8703-aa78cce64b54,Namespace:calico-system,Attempt:1,} returns sandbox id \"f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6\"" Apr 13 19:25:26.696535 containerd[1924]: time="2026-04-13T19:25:26.696432863Z" level=info msg="CreateContainer within sandbox \"9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c536d7ec346dc2e1ad7f3122740671bffc1b7522ccf7691acfbe630da4d23f2c\"" Apr 13 19:25:26.700007 containerd[1924]: time="2026-04-13T19:25:26.698494331Z" level=info msg="StartContainer for \"c536d7ec346dc2e1ad7f3122740671bffc1b7522ccf7691acfbe630da4d23f2c\"" Apr 13 19:25:26.722023 systemd-networkd[1842]: cali285457d5f77: Gained IPv6LL Apr 13 19:25:26.752313 containerd[1924]: time="2026-04-13T19:25:26.750333467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6444d8f97b-b7bhq,Uid:fced93c3-4c99-4651-a35d-57e6eb8bc151,Namespace:calico-system,Attempt:1,} returns sandbox id \"065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994\"" Apr 13 19:25:26.940537 containerd[1924]: time="2026-04-13T19:25:26.939617376Z" level=info msg="StopPodSandbox for \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\"" Apr 13 19:25:26.947873 containerd[1924]: time="2026-04-13T19:25:26.939874500Z" level=info msg="StopPodSandbox for \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\"" Apr 13 19:25:26.975487 systemd[1]: Started cri-containerd-c536d7ec346dc2e1ad7f3122740671bffc1b7522ccf7691acfbe630da4d23f2c.scope - libcontainer container c536d7ec346dc2e1ad7f3122740671bffc1b7522ccf7691acfbe630da4d23f2c. 
Apr 13 19:25:27.107484 systemd-networkd[1842]: calie210cc9f1ac: Gained IPv6LL Apr 13 19:25:27.186386 containerd[1924]: time="2026-04-13T19:25:27.185674989Z" level=info msg="StartContainer for \"c536d7ec346dc2e1ad7f3122740671bffc1b7522ccf7691acfbe630da4d23f2c\" returns successfully" Apr 13 19:25:27.206999 sshd[5468]: Accepted publickey for core from 4.175.71.9 port 34184 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:27.213904 sshd[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:27.232439 systemd-logind[1907]: New session 8 of user core. Apr 13 19:25:27.239401 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 19:25:27.274945 systemd-networkd[1842]: cali5e28eac9c86: Link UP Apr 13 19:25:27.284815 systemd-networkd[1842]: cali5e28eac9c86: Gained carrier Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:26.674 [INFO][5490] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0 calico-apiserver-77999c4d5b- calico-system 222d069d-fa76-4e04-b779-eb3d366b1a95 1024 0 2026-04-13 19:24:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77999c4d5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-12 calico-apiserver-77999c4d5b-4qscg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5e28eac9c86 [] [] }} ContainerID="c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4qscg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:26.675 [INFO][5490] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4qscg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:26.968 [INFO][5562] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" HandleID="k8s-pod-network.c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.038 [INFO][5562] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" HandleID="k8s-pod-network.c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001025d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-12", "pod":"calico-apiserver-77999c4d5b-4qscg", "timestamp":"2026-04-13 19:25:26.968182344 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002a0160)} Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.038 [INFO][5562] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.039 [INFO][5562] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.039 [INFO][5562] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.055 [INFO][5562] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" host="ip-172-31-19-12" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.074 [INFO][5562] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-12" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.103 [INFO][5562] ipam/ipam.go 526: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.119 [INFO][5562] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.126 [INFO][5562] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.126 [INFO][5562] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" host="ip-172-31-19-12" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.138 [INFO][5562] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127 Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.161 [INFO][5562] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" host="ip-172-31-19-12" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.194 [INFO][5562] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.198/26] block=192.168.64.192/26 
handle="k8s-pod-network.c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" host="ip-172-31-19-12" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.195 [INFO][5562] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.198/26] handle="k8s-pod-network.c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" host="ip-172-31-19-12" Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.198 [INFO][5562] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:27.377873 containerd[1924]: 2026-04-13 19:25:27.198 [INFO][5562] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.198/26] IPv6=[] ContainerID="c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" HandleID="k8s-pod-network.c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:25:27.379563 containerd[1924]: 2026-04-13 19:25:27.237 [INFO][5490] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4qscg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0", GenerateName:"calico-apiserver-77999c4d5b-", Namespace:"calico-system", SelfLink:"", UID:"222d069d-fa76-4e04-b779-eb3d366b1a95", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77999c4d5b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"calico-apiserver-77999c4d5b-4qscg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5e28eac9c86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:27.379563 containerd[1924]: 2026-04-13 19:25:27.237 [INFO][5490] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.198/32] ContainerID="c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4qscg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:25:27.379563 containerd[1924]: 2026-04-13 19:25:27.237 [INFO][5490] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e28eac9c86 ContainerID="c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4qscg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:25:27.379563 containerd[1924]: 2026-04-13 19:25:27.296 [INFO][5490] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4qscg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:25:27.379563 containerd[1924]: 2026-04-13 19:25:27.301 [INFO][5490] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4qscg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0", GenerateName:"calico-apiserver-77999c4d5b-", Namespace:"calico-system", SelfLink:"", UID:"222d069d-fa76-4e04-b779-eb3d366b1a95", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77999c4d5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127", Pod:"calico-apiserver-77999c4d5b-4qscg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5e28eac9c86", MAC:"d2:76:56:00:ab:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:27.379563 containerd[1924]: 2026-04-13 19:25:27.354 [INFO][5490] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127" Namespace="calico-system" Pod="calico-apiserver-77999c4d5b-4qscg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:26.974 [WARNING][5544] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--7895fb4cdc--sjfzr-eth0" Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:26.974 [INFO][5544] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:26.975 [INFO][5544] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" iface="eth0" netns="" Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:26.975 [INFO][5544] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:26.975 [INFO][5544] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:27.282 [INFO][5608] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" HandleID="k8s-pod-network.a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Workload="ip--172--31--19--12-k8s-whisker--7895fb4cdc--sjfzr-eth0" Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:27.296 [INFO][5608] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:27.301 [INFO][5608] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:27.341 [WARNING][5608] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" HandleID="k8s-pod-network.a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Workload="ip--172--31--19--12-k8s-whisker--7895fb4cdc--sjfzr-eth0" Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:27.342 [INFO][5608] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" HandleID="k8s-pod-network.a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Workload="ip--172--31--19--12-k8s-whisker--7895fb4cdc--sjfzr-eth0" Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:27.352 [INFO][5608] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:27.399348 containerd[1924]: 2026-04-13 19:25:27.382 [INFO][5544] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6" Apr 13 19:25:27.399348 containerd[1924]: time="2026-04-13T19:25:27.398926786Z" level=info msg="TearDown network for sandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\" successfully" Apr 13 19:25:27.418062 containerd[1924]: time="2026-04-13T19:25:27.416826347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:25:27.418062 containerd[1924]: time="2026-04-13T19:25:27.417334811Z" level=info msg="RemovePodSandbox \"a4836058ef4e6e0f56049fadb580e4f21e474d3727cc7cea7139ecbfd5937cb6\" returns successfully" Apr 13 19:25:27.511665 containerd[1924]: time="2026-04-13T19:25:27.509900771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:27.511665 containerd[1924]: time="2026-04-13T19:25:27.510026015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:27.511665 containerd[1924]: time="2026-04-13T19:25:27.510062447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:27.511665 containerd[1924]: time="2026-04-13T19:25:27.510257975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:27.599925 systemd[1]: run-containerd-runc-k8s.io-c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127-runc.ShhZWt.mount: Deactivated successfully. Apr 13 19:25:27.617109 systemd[1]: Started cri-containerd-c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127.scope - libcontainer container c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127. Apr 13 19:25:27.617922 systemd-networkd[1842]: cali45490a0a328: Gained IPv6LL Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:27.416 [INFO][5631] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:27.421 [INFO][5631] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" iface="eth0" netns="/var/run/netns/cni-dc8a4d3c-2968-9279-6143-816015a5393a" Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:27.424 [INFO][5631] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" iface="eth0" netns="/var/run/netns/cni-dc8a4d3c-2968-9279-6143-816015a5393a" Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:27.424 [INFO][5631] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" iface="eth0" netns="/var/run/netns/cni-dc8a4d3c-2968-9279-6143-816015a5393a" Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:27.425 [INFO][5631] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:27.427 [INFO][5631] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:27.751 [INFO][5677] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" HandleID="k8s-pod-network.c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:27.754 [INFO][5677] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:27.754 [INFO][5677] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:28.034 [WARNING][5677] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" HandleID="k8s-pod-network.c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:28.039 [INFO][5677] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" HandleID="k8s-pod-network.c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:28.090 [INFO][5677] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:28.137553 containerd[1924]: 2026-04-13 19:25:28.118 [INFO][5631] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:25:28.150445 systemd[1]: run-netns-cni\x2ddc8a4d3c\x2d2968\x2d9279\x2d6143\x2d816015a5393a.mount: Deactivated successfully. 
Apr 13 19:25:28.170039 containerd[1924]: time="2026-04-13T19:25:28.169960990Z" level=info msg="TearDown network for sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\" successfully"
Apr 13 19:25:28.170039 containerd[1924]: time="2026-04-13T19:25:28.170024734Z" level=info msg="StopPodSandbox for \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\" returns successfully"
Apr 13 19:25:28.188011 containerd[1924]: time="2026-04-13T19:25:28.185270110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c4w5w,Uid:145266ff-892c-4549-b337-19bfa44f9e42,Namespace:kube-system,Attempt:1,}"
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:27.884 [INFO][5630] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7"
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:27.884 [INFO][5630] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" iface="eth0" netns="/var/run/netns/cni-dd3195f6-6859-fb81-f90c-adf3e83875dc"
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:27.885 [INFO][5630] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" iface="eth0" netns="/var/run/netns/cni-dd3195f6-6859-fb81-f90c-adf3e83875dc"
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:27.890 [INFO][5630] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" iface="eth0" netns="/var/run/netns/cni-dd3195f6-6859-fb81-f90c-adf3e83875dc"
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:27.890 [INFO][5630] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7"
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:27.890 [INFO][5630] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7"
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:28.035 [INFO][5716] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" HandleID="k8s-pod-network.3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0"
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:28.039 [INFO][5716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:28.091 [INFO][5716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:28.213 [WARNING][5716] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" HandleID="k8s-pod-network.3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0"
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:28.215 [INFO][5716] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" HandleID="k8s-pod-network.3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0"
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:28.259 [INFO][5716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 19:25:28.292594 containerd[1924]: 2026-04-13 19:25:28.274 [INFO][5630] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7"
Apr 13 19:25:28.302423 systemd[1]: run-netns-cni\x2ddd3195f6\x2d6859\x2dfb81\x2df90c\x2dadf3e83875dc.mount: Deactivated successfully.
Apr 13 19:25:28.305559 containerd[1924]: time="2026-04-13T19:25:28.302850215Z" level=info msg="TearDown network for sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\" successfully"
Apr 13 19:25:28.305559 containerd[1924]: time="2026-04-13T19:25:28.302894423Z" level=info msg="StopPodSandbox for \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\" returns successfully"
Apr 13 19:25:28.312622 containerd[1924]: time="2026-04-13T19:25:28.311561999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzt8j,Uid:5bbd3846-92e6-469c-993d-c2ef707609bb,Namespace:calico-system,Attempt:1,}"
Apr 13 19:25:28.403105 sshd[5468]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:28.421602 systemd[1]: sshd@7-172.31.19.12:22-4.175.71.9:34184.service: Deactivated successfully.
Apr 13 19:25:28.429280 systemd[1]: session-8.scope: Deactivated successfully.
Apr 13 19:25:28.434822 systemd-logind[1907]: Session 8 logged out. Waiting for processes to exit.
Apr 13 19:25:28.441295 systemd-logind[1907]: Removed session 8.
Apr 13 19:25:28.577939 systemd-networkd[1842]: cali5e28eac9c86: Gained IPv6LL
Apr 13 19:25:28.718626 kubelet[3342]: I0413 19:25:28.717723 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2478b" podStartSLOduration=58.717700189 podStartE2EDuration="58.717700189s" podCreationTimestamp="2026-04-13 19:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:28.003716985 +0000 UTC m=+64.340422616" watchObservedRunningTime="2026-04-13 19:25:28.717700189 +0000 UTC m=+65.054405748"
Apr 13 19:25:28.951865 systemd-networkd[1842]: calif04e5ffa2b9: Link UP
Apr 13 19:25:28.956494 systemd-networkd[1842]: calif04e5ffa2b9: Gained carrier
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.396 [INFO][5729] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0 coredns-66bc5c9577- kube-system 145266ff-892c-4549-b337-19bfa44f9e42 1080 0 2026-04-13 19:24:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-12 coredns-66bc5c9577-c4w5w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif04e5ffa2b9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" Namespace="kube-system" Pod="coredns-66bc5c9577-c4w5w" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.396 [INFO][5729] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" Namespace="kube-system" Pod="coredns-66bc5c9577-c4w5w" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.685 [INFO][5753] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" HandleID="k8s-pod-network.4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.762 [INFO][5753] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" HandleID="k8s-pod-network.4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e9cd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-12", "pod":"coredns-66bc5c9577-c4w5w", "timestamp":"2026-04-13 19:25:28.685026517 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186f20)}
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.762 [INFO][5753] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.762 [INFO][5753] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.763 [INFO][5753] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12'
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.779 [INFO][5753] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" host="ip-172-31-19-12"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.842 [INFO][5753] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-12"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.861 [INFO][5753] ipam/ipam.go 526: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.867 [INFO][5753] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.872 [INFO][5753] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.872 [INFO][5753] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" host="ip-172-31-19-12"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.880 [INFO][5753] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.887 [INFO][5753] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" host="ip-172-31-19-12"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.908 [INFO][5753] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.199/26] block=192.168.64.192/26 handle="k8s-pod-network.4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" host="ip-172-31-19-12"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.909 [INFO][5753] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.199/26] handle="k8s-pod-network.4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" host="ip-172-31-19-12"
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.910 [INFO][5753] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 19:25:29.015717 containerd[1924]: 2026-04-13 19:25:28.910 [INFO][5753] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.199/26] IPv6=[] ContainerID="4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" HandleID="k8s-pod-network.4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0"
Apr 13 19:25:29.019023 containerd[1924]: 2026-04-13 19:25:28.921 [INFO][5729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" Namespace="kube-system" Pod="coredns-66bc5c9577-c4w5w" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"145266ff-892c-4549-b337-19bfa44f9e42", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"coredns-66bc5c9577-c4w5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif04e5ffa2b9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 19:25:29.019023 containerd[1924]: 2026-04-13 19:25:28.923 [INFO][5729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.199/32] ContainerID="4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" Namespace="kube-system" Pod="coredns-66bc5c9577-c4w5w" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0"
Apr 13 19:25:29.019023 containerd[1924]: 2026-04-13 19:25:28.923 [INFO][5729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif04e5ffa2b9 ContainerID="4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" Namespace="kube-system" Pod="coredns-66bc5c9577-c4w5w" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0"
Apr 13 19:25:29.019023 containerd[1924]: 2026-04-13 19:25:28.956 [INFO][5729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" Namespace="kube-system" Pod="coredns-66bc5c9577-c4w5w" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0"
Apr 13 19:25:29.019023 containerd[1924]: 2026-04-13 19:25:28.962 [INFO][5729] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" Namespace="kube-system" Pod="coredns-66bc5c9577-c4w5w" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"145266ff-892c-4549-b337-19bfa44f9e42", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1", Pod:"coredns-66bc5c9577-c4w5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif04e5ffa2b9", MAC:"5a:d9:ec:1c:76:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 19:25:29.019023 containerd[1924]: 2026-04-13 19:25:29.001 [INFO][5729] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1" Namespace="kube-system" Pod="coredns-66bc5c9577-c4w5w" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0"
Apr 13 19:25:29.123715 containerd[1924]: time="2026-04-13T19:25:29.123026627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77999c4d5b-4qscg,Uid:222d069d-fa76-4e04-b779-eb3d366b1a95,Namespace:calico-system,Attempt:1,} returns sandbox id \"c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127\""
Apr 13 19:25:29.165879 systemd-networkd[1842]: cali73636d73cfc: Link UP
Apr 13 19:25:29.181136 systemd-networkd[1842]: cali73636d73cfc: Gained carrier
Apr 13 19:25:29.244406 containerd[1924]: time="2026-04-13T19:25:29.242614860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 19:25:29.244982 containerd[1924]: time="2026-04-13T19:25:29.243838656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 19:25:29.244982 containerd[1924]: time="2026-04-13T19:25:29.243917340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:25:29.244982 containerd[1924]: time="2026-04-13T19:25:29.244144092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:28.772 [INFO][5740] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0 csi-node-driver- calico-system 5bbd3846-92e6-469c-993d-c2ef707609bb 1083 0 2026-04-13 19:24:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-12 csi-node-driver-nzt8j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali73636d73cfc [] [] }} ContainerID="875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" Namespace="calico-system" Pod="csi-node-driver-nzt8j" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:28.773 [INFO][5740] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" Namespace="calico-system" Pod="csi-node-driver-nzt8j" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:28.915 [INFO][5771] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" HandleID="k8s-pod-network.875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:28.960 [INFO][5771] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" HandleID="k8s-pod-network.875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-12", "pod":"csi-node-driver-nzt8j", "timestamp":"2026-04-13 19:25:28.915209834 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400026b600)}
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:28.960 [INFO][5771] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:28.960 [INFO][5771] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:28.960 [INFO][5771] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12'
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:28.968 [INFO][5771] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" host="ip-172-31-19-12"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:28.997 [INFO][5771] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-19-12"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:29.023 [INFO][5771] ipam/ipam.go 526: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:29.036 [INFO][5771] ipam/ipam.go 160: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:29.055 [INFO][5771] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:29.055 [INFO][5771] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" host="ip-172-31-19-12"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:29.074 [INFO][5771] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:29.091 [INFO][5771] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" host="ip-172-31-19-12"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:29.118 [INFO][5771] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.64.200/26] block=192.168.64.192/26 handle="k8s-pod-network.875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" host="ip-172-31-19-12"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:29.118 [INFO][5771] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.64.200/26] handle="k8s-pod-network.875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" host="ip-172-31-19-12"
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:29.118 [INFO][5771] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 19:25:29.266053 containerd[1924]: 2026-04-13 19:25:29.118 [INFO][5771] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.64.200/26] IPv6=[] ContainerID="875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" HandleID="k8s-pod-network.875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0"
Apr 13 19:25:29.271849 containerd[1924]: 2026-04-13 19:25:29.126 [INFO][5740] cni-plugin/k8s.go 418: Populated endpoint ContainerID="875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" Namespace="calico-system" Pod="csi-node-driver-nzt8j" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5bbd3846-92e6-469c-993d-c2ef707609bb", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"csi-node-driver-nzt8j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73636d73cfc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 19:25:29.271849 containerd[1924]: 2026-04-13 19:25:29.129 [INFO][5740] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.200/32] ContainerID="875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" Namespace="calico-system" Pod="csi-node-driver-nzt8j" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0"
Apr 13 19:25:29.271849 containerd[1924]: 2026-04-13 19:25:29.130 [INFO][5740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73636d73cfc ContainerID="875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" Namespace="calico-system" Pod="csi-node-driver-nzt8j" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0"
Apr 13 19:25:29.271849 containerd[1924]: 2026-04-13 19:25:29.184 [INFO][5740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" Namespace="calico-system" Pod="csi-node-driver-nzt8j" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0"
Apr 13 19:25:29.271849 containerd[1924]: 2026-04-13 19:25:29.191 [INFO][5740] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" Namespace="calico-system" Pod="csi-node-driver-nzt8j" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5bbd3846-92e6-469c-993d-c2ef707609bb", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1", Pod:"csi-node-driver-nzt8j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73636d73cfc", MAC:"32:ce:0c:c2:45:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 19:25:29.271849 containerd[1924]: 2026-04-13 19:25:29.235 [INFO][5740] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1" Namespace="calico-system" Pod="csi-node-driver-nzt8j" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0"
Apr 13 19:25:29.363170 systemd[1]: run-containerd-runc-k8s.io-4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1-runc.PvP9LU.mount: Deactivated successfully.
Apr 13 19:25:29.374421 systemd[1]: Started cri-containerd-4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1.scope - libcontainer container 4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1.
Apr 13 19:25:29.426037 containerd[1924]: time="2026-04-13T19:25:29.422256925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 19:25:29.426037 containerd[1924]: time="2026-04-13T19:25:29.422611561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 19:25:29.426037 containerd[1924]: time="2026-04-13T19:25:29.422698021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:25:29.426037 containerd[1924]: time="2026-04-13T19:25:29.422940229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:25:29.513061 systemd[1]: Started cri-containerd-875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1.scope - libcontainer container 875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1.
Apr 13 19:25:29.561271 containerd[1924]: time="2026-04-13T19:25:29.559316317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c4w5w,Uid:145266ff-892c-4549-b337-19bfa44f9e42,Namespace:kube-system,Attempt:1,} returns sandbox id \"4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1\""
Apr 13 19:25:29.586204 containerd[1924]: time="2026-04-13T19:25:29.585993409Z" level=info msg="CreateContainer within sandbox \"4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 19:25:29.628694 containerd[1924]: time="2026-04-13T19:25:29.628005998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzt8j,Uid:5bbd3846-92e6-469c-993d-c2ef707609bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1\""
Apr 13 19:25:29.638703 containerd[1924]: time="2026-04-13T19:25:29.637872926Z" level=info msg="CreateContainer within sandbox \"4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e331f07816d2b83fbd71082e530732df78674399e9070dcabaea2dbfb3df6ad1\""
Apr 13 19:25:29.640991 containerd[1924]: time="2026-04-13T19:25:29.639690410Z" level=info msg="StartContainer for \"e331f07816d2b83fbd71082e530732df78674399e9070dcabaea2dbfb3df6ad1\""
Apr 13 19:25:29.725722 systemd[1]: Started cri-containerd-e331f07816d2b83fbd71082e530732df78674399e9070dcabaea2dbfb3df6ad1.scope - libcontainer container e331f07816d2b83fbd71082e530732df78674399e9070dcabaea2dbfb3df6ad1.
Apr 13 19:25:29.826846 containerd[1924]: time="2026-04-13T19:25:29.824950827Z" level=info msg="StartContainer for \"e331f07816d2b83fbd71082e530732df78674399e9070dcabaea2dbfb3df6ad1\" returns successfully"
Apr 13 19:25:30.149442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2980220606.mount: Deactivated successfully.
Apr 13 19:25:30.242310 systemd-networkd[1842]: calif04e5ffa2b9: Gained IPv6LL
Apr 13 19:25:30.435913 systemd-networkd[1842]: cali73636d73cfc: Gained IPv6LL
Apr 13 19:25:30.695162 containerd[1924]: time="2026-04-13T19:25:30.694534431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:30.701441 containerd[1924]: time="2026-04-13T19:25:30.701382903Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:30.702211 containerd[1924]: time="2026-04-13T19:25:30.701675103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980"
Apr 13 19:25:30.713360 containerd[1924]: time="2026-04-13T19:25:30.713259303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:30.719897 containerd[1924]: time="2026-04-13T19:25:30.718083699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 6.958913219s"
Apr 13 19:25:30.719897 containerd[1924]: time="2026-04-13T19:25:30.718152999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\""
Apr 13 19:25:30.721156 containerd[1924]: time="2026-04-13T19:25:30.720312627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 13 19:25:30.729650 containerd[1924]: time="2026-04-13T19:25:30.729578331Z" level=info msg="CreateContainer within sandbox \"4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Apr 13 19:25:30.784014 kubelet[3342]: I0413 19:25:30.783278 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-c4w5w" podStartSLOduration=60.783256959 podStartE2EDuration="1m0.783256959s" podCreationTimestamp="2026-04-13 19:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:30.744131571 +0000 UTC m=+67.080837178" watchObservedRunningTime="2026-04-13 19:25:30.783256959 +0000 UTC m=+67.119962530"
Apr 13 19:25:30.829512 containerd[1924]: time="2026-04-13T19:25:30.829449832Z" level=info msg="CreateContainer within sandbox \"4912eba8e5e93c1209d0e1f1aaa8e33eec3beab2c68f09555b9b6d4a3d2f0f24\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"57b79d6971537387067d4afa605a8921bf526a43c483b5d294977f92215a151a\""
Apr 13 19:25:30.831005 containerd[1924]: time="2026-04-13T19:25:30.830942872Z" level=info msg="StartContainer for \"57b79d6971537387067d4afa605a8921bf526a43c483b5d294977f92215a151a\""
Apr 13 19:25:30.939044 systemd[1]: Started cri-containerd-57b79d6971537387067d4afa605a8921bf526a43c483b5d294977f92215a151a.scope - libcontainer container 57b79d6971537387067d4afa605a8921bf526a43c483b5d294977f92215a151a.
Apr 13 19:25:31.028712 containerd[1924]: time="2026-04-13T19:25:31.028542709Z" level=info msg="StartContainer for \"57b79d6971537387067d4afa605a8921bf526a43c483b5d294977f92215a151a\" returns successfully"
Apr 13 19:25:32.692844 ntpd[1900]: Listen normally on 8 vxlan.calico 192.168.64.192:123
Apr 13 19:25:32.694936 ntpd[1900]: 13 Apr 19:25:32 ntpd[1900]: Listen normally on 8 vxlan.calico 192.168.64.192:123
Apr 13 19:25:32.694936 ntpd[1900]: 13 Apr 19:25:32 ntpd[1900]: Listen normally on 9 cali84e0569d778 [fe80::ecee:eeff:feee:eeee%4]:123
Apr 13 19:25:32.694936 ntpd[1900]: 13 Apr 19:25:32 ntpd[1900]: Listen normally on 10 vxlan.calico [fe80::641d:45ff:febc:3514%5]:123
Apr 13 19:25:32.694936 ntpd[1900]: 13 Apr 19:25:32 ntpd[1900]: Listen normally on 11 calid82583c3372 [fe80::ecee:eeff:feee:eeee%8]:123
Apr 13 19:25:32.694936 ntpd[1900]: 13 Apr 19:25:32 ntpd[1900]: Listen normally on 12 cali285457d5f77 [fe80::ecee:eeff:feee:eeee%9]:123
Apr 13 19:25:32.694936 ntpd[1900]: 13 Apr 19:25:32 ntpd[1900]: Listen normally on 13 calie210cc9f1ac [fe80::ecee:eeff:feee:eeee%10]:123
Apr 13 19:25:32.694936 ntpd[1900]: 13 Apr 19:25:32 ntpd[1900]: Listen normally on 14 cali45490a0a328 [fe80::ecee:eeff:feee:eeee%11]:123
Apr 13 19:25:32.694936 ntpd[1900]: 13 Apr 19:25:32 ntpd[1900]: Listen normally on 15 cali5e28eac9c86 [fe80::ecee:eeff:feee:eeee%12]:123
Apr 13 19:25:32.694936 ntpd[1900]: 13 Apr 19:25:32 ntpd[1900]: Listen normally on 16 calif04e5ffa2b9 [fe80::ecee:eeff:feee:eeee%13]:123
Apr 13 19:25:32.694936 ntpd[1900]: 13 Apr 19:25:32 ntpd[1900]: Listen normally on 17 cali73636d73cfc [fe80::ecee:eeff:feee:eeee%14]:123
Apr 13 19:25:32.692969 ntpd[1900]: Listen normally on 9 cali84e0569d778 [fe80::ecee:eeff:feee:eeee%4]:123
Apr 13 19:25:32.693049 ntpd[1900]: Listen normally on 10 vxlan.calico [fe80::641d:45ff:febc:3514%5]:123
Apr 13 19:25:32.693116 ntpd[1900]: Listen normally on 11 calid82583c3372 [fe80::ecee:eeff:feee:eeee%8]:123
Apr 13 19:25:32.693185 ntpd[1900]: Listen normally on 12 cali285457d5f77 [fe80::ecee:eeff:feee:eeee%9]:123
Apr 13 19:25:32.693251 ntpd[1900]: Listen normally on 13 calie210cc9f1ac [fe80::ecee:eeff:feee:eeee%10]:123
Apr 13 19:25:32.693340 ntpd[1900]: Listen normally on 14 cali45490a0a328 [fe80::ecee:eeff:feee:eeee%11]:123
Apr 13 19:25:32.693434 ntpd[1900]: Listen normally on 15 cali5e28eac9c86 [fe80::ecee:eeff:feee:eeee%12]:123
Apr 13 19:25:32.693502 ntpd[1900]: Listen normally on 16 calif04e5ffa2b9 [fe80::ecee:eeff:feee:eeee%13]:123
Apr 13 19:25:32.693580 ntpd[1900]: Listen normally on 17 cali73636d73cfc [fe80::ecee:eeff:feee:eeee%14]:123
Apr 13 19:25:33.590267 systemd[1]: Started sshd@8-172.31.19.12:22-4.175.71.9:34194.service - OpenSSH per-connection server daemon (4.175.71.9:34194).
Apr 13 19:25:34.612227 sshd[6049]: Accepted publickey for core from 4.175.71.9 port 34194 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:25:34.616094 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:25:34.628025 systemd-logind[1907]: New session 9 of user core.
Apr 13 19:25:34.635014 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 13 19:25:35.525046 sshd[6049]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:35.536439 systemd[1]: sshd@8-172.31.19.12:22-4.175.71.9:34194.service: Deactivated successfully.
Apr 13 19:25:35.544513 systemd[1]: session-9.scope: Deactivated successfully.
Apr 13 19:25:35.549111 systemd-logind[1907]: Session 9 logged out. Waiting for processes to exit.
Apr 13 19:25:35.552484 systemd-logind[1907]: Removed session 9.
Apr 13 19:25:36.266991 containerd[1924]: time="2026-04-13T19:25:36.266918527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:36.268586 containerd[1924]: time="2026-04-13T19:25:36.268510675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315"
Apr 13 19:25:36.269840 containerd[1924]: time="2026-04-13T19:25:36.268997227Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:36.275918 containerd[1924]: time="2026-04-13T19:25:36.275833531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:36.276802 containerd[1924]: time="2026-04-13T19:25:36.276695911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 5.556327364s"
Apr 13 19:25:36.276913 containerd[1924]: time="2026-04-13T19:25:36.276814411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\""
Apr 13 19:25:36.281820 containerd[1924]: time="2026-04-13T19:25:36.281059291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Apr 13 19:25:36.286591 containerd[1924]: time="2026-04-13T19:25:36.286158847Z" level=info msg="CreateContainer within sandbox \"f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 13 19:25:36.316335 containerd[1924]: time="2026-04-13T19:25:36.316253827Z" level=info msg="CreateContainer within sandbox \"f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"717f2c04df9e66712bd966b23ec4eaf9886d98696f7b846016e3ab67314f3bbb\""
Apr 13 19:25:36.319838 containerd[1924]: time="2026-04-13T19:25:36.319230631Z" level=info msg="StartContainer for \"717f2c04df9e66712bd966b23ec4eaf9886d98696f7b846016e3ab67314f3bbb\""
Apr 13 19:25:36.383077 systemd[1]: Started cri-containerd-717f2c04df9e66712bd966b23ec4eaf9886d98696f7b846016e3ab67314f3bbb.scope - libcontainer container 717f2c04df9e66712bd966b23ec4eaf9886d98696f7b846016e3ab67314f3bbb.
Apr 13 19:25:36.455792 containerd[1924]: time="2026-04-13T19:25:36.453548755Z" level=info msg="StartContainer for \"717f2c04df9e66712bd966b23ec4eaf9886d98696f7b846016e3ab67314f3bbb\" returns successfully"
Apr 13 19:25:36.761848 kubelet[3342]: I0413 19:25:36.761361 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-8wg88" podStartSLOduration=39.79900385 podStartE2EDuration="46.761320989s" podCreationTimestamp="2026-04-13 19:24:50 +0000 UTC" firstStartedPulling="2026-04-13 19:25:23.757647056 +0000 UTC m=+60.094352675" lastFinishedPulling="2026-04-13 19:25:30.719964195 +0000 UTC m=+67.056669814" observedRunningTime="2026-04-13 19:25:31.777605176 +0000 UTC m=+68.114310735" watchObservedRunningTime="2026-04-13 19:25:36.761320989 +0000 UTC m=+73.098026560"
Apr 13 19:25:37.743981 kubelet[3342]: I0413 19:25:37.741194 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 19:25:39.449014 containerd[1924]: time="2026-04-13T19:25:39.448955962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:39.456955 containerd[1924]: time="2026-04-13T19:25:39.456325162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955"
Apr 13 19:25:39.463305 containerd[1924]: time="2026-04-13T19:25:39.461802346Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:39.472619 containerd[1924]: time="2026-04-13T19:25:39.472529830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:39.476507 containerd[1924]: time="2026-04-13T19:25:39.474459874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.193172115s"
Apr 13 19:25:39.476507 containerd[1924]: time="2026-04-13T19:25:39.474512782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\""
Apr 13 19:25:39.478495 containerd[1924]: time="2026-04-13T19:25:39.478354630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 13 19:25:39.519902 containerd[1924]: time="2026-04-13T19:25:39.519626363Z" level=info msg="CreateContainer within sandbox \"065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 13 19:25:39.577316 containerd[1924]: time="2026-04-13T19:25:39.577123799Z" level=info msg="CreateContainer within sandbox \"065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5b4abfcad99ba0d10d100742b9b309f953b6ea327f43e0c46b71b27eea98c926\""
Apr 13 19:25:39.580893 containerd[1924]: time="2026-04-13T19:25:39.579674207Z" level=info msg="StartContainer for \"5b4abfcad99ba0d10d100742b9b309f953b6ea327f43e0c46b71b27eea98c926\""
Apr 13 19:25:39.664217 systemd[1]: Started cri-containerd-5b4abfcad99ba0d10d100742b9b309f953b6ea327f43e0c46b71b27eea98c926.scope - libcontainer container 5b4abfcad99ba0d10d100742b9b309f953b6ea327f43e0c46b71b27eea98c926.
Apr 13 19:25:39.748426 containerd[1924]: time="2026-04-13T19:25:39.748262376Z" level=info msg="StartContainer for \"5b4abfcad99ba0d10d100742b9b309f953b6ea327f43e0c46b71b27eea98c926\" returns successfully"
Apr 13 19:25:39.792781 kubelet[3342]: I0413 19:25:39.791471 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-77999c4d5b-4q7t4" podStartSLOduration=41.144485604 podStartE2EDuration="50.791448924s" podCreationTimestamp="2026-04-13 19:24:49 +0000 UTC" firstStartedPulling="2026-04-13 19:25:26.632135255 +0000 UTC m=+62.968840826" lastFinishedPulling="2026-04-13 19:25:36.279098587 +0000 UTC m=+72.615804146" observedRunningTime="2026-04-13 19:25:36.763656369 +0000 UTC m=+73.100361976" watchObservedRunningTime="2026-04-13 19:25:39.791448924 +0000 UTC m=+76.128154495"
Apr 13 19:25:39.874392 containerd[1924]: time="2026-04-13T19:25:39.874291920Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:39.876852 containerd[1924]: time="2026-04-13T19:25:39.876559296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 13 19:25:39.883117 containerd[1924]: time="2026-04-13T19:25:39.883048392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 404.544506ms"
Apr 13 19:25:39.883117 containerd[1924]: time="2026-04-13T19:25:39.883115472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\""
Apr 13 19:25:39.886653 containerd[1924]: time="2026-04-13T19:25:39.886369789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Apr 13 19:25:39.894672 containerd[1924]: time="2026-04-13T19:25:39.894601489Z" level=info msg="CreateContainer within sandbox \"c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 13 19:25:39.936619 containerd[1924]: time="2026-04-13T19:25:39.936387805Z" level=info msg="CreateContainer within sandbox \"c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6ec9659d17ff66ae16e814edcec0f470f0c183f13eba218868e21cb8d8f3224d\""
Apr 13 19:25:39.937858 containerd[1924]: time="2026-04-13T19:25:39.937789981Z" level=info msg="StartContainer for \"6ec9659d17ff66ae16e814edcec0f470f0c183f13eba218868e21cb8d8f3224d\""
Apr 13 19:25:40.014058 systemd[1]: Started cri-containerd-6ec9659d17ff66ae16e814edcec0f470f0c183f13eba218868e21cb8d8f3224d.scope - libcontainer container 6ec9659d17ff66ae16e814edcec0f470f0c183f13eba218868e21cb8d8f3224d.
Apr 13 19:25:40.146783 containerd[1924]: time="2026-04-13T19:25:40.146496706Z" level=info msg="StartContainer for \"6ec9659d17ff66ae16e814edcec0f470f0c183f13eba218868e21cb8d8f3224d\" returns successfully"
Apr 13 19:25:40.704273 systemd[1]: Started sshd@9-172.31.19.12:22-4.175.71.9:59428.service - OpenSSH per-connection server daemon (4.175.71.9:59428).
Apr 13 19:25:40.807036 kubelet[3342]: I0413 19:25:40.806716 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6444d8f97b-b7bhq" podStartSLOduration=36.091786294 podStartE2EDuration="48.806168089s" podCreationTimestamp="2026-04-13 19:24:52 +0000 UTC" firstStartedPulling="2026-04-13 19:25:26.763127759 +0000 UTC m=+63.099833330" lastFinishedPulling="2026-04-13 19:25:39.477509482 +0000 UTC m=+75.814215125" observedRunningTime="2026-04-13 19:25:39.791318772 +0000 UTC m=+76.128024343" watchObservedRunningTime="2026-04-13 19:25:40.806168089 +0000 UTC m=+77.142874032"
Apr 13 19:25:40.814651 kubelet[3342]: I0413 19:25:40.811153 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-77999c4d5b-4qscg" podStartSLOduration=41.057088247 podStartE2EDuration="51.811104985s" podCreationTimestamp="2026-04-13 19:24:49 +0000 UTC" firstStartedPulling="2026-04-13 19:25:29.132152855 +0000 UTC m=+65.468858426" lastFinishedPulling="2026-04-13 19:25:39.886169605 +0000 UTC m=+76.222875164" observedRunningTime="2026-04-13 19:25:40.796654657 +0000 UTC m=+77.133360300" watchObservedRunningTime="2026-04-13 19:25:40.811104985 +0000 UTC m=+77.147810628"
Apr 13 19:25:41.492776 containerd[1924]: time="2026-04-13T19:25:41.490977708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:41.492776 containerd[1924]: time="2026-04-13T19:25:41.492387900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497"
Apr 13 19:25:41.495789 containerd[1924]: time="2026-04-13T19:25:41.494095656Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:41.499590 containerd[1924]: time="2026-04-13T19:25:41.499522225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:41.502446 containerd[1924]: time="2026-04-13T19:25:41.501617125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.615189244s"
Apr 13 19:25:41.502647 containerd[1924]: time="2026-04-13T19:25:41.502616617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\""
Apr 13 19:25:41.512577 containerd[1924]: time="2026-04-13T19:25:41.512515693Z" level=info msg="CreateContainer within sandbox \"875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 13 19:25:41.549023 containerd[1924]: time="2026-04-13T19:25:41.548952289Z" level=info msg="CreateContainer within sandbox \"875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"54ed626dfcdb61c849af1343070ca8f9152fe63a156773a004d451e6b276cdff\""
Apr 13 19:25:41.550443 containerd[1924]: time="2026-04-13T19:25:41.550367293Z" level=info msg="StartContainer for \"54ed626dfcdb61c849af1343070ca8f9152fe63a156773a004d451e6b276cdff\""
Apr 13 19:25:41.657115 systemd[1]: Started cri-containerd-54ed626dfcdb61c849af1343070ca8f9152fe63a156773a004d451e6b276cdff.scope - libcontainer container 54ed626dfcdb61c849af1343070ca8f9152fe63a156773a004d451e6b276cdff.
Apr 13 19:25:41.756149 sshd[6248]: Accepted publickey for core from 4.175.71.9 port 59428 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:25:41.762646 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:25:41.770538 containerd[1924]: time="2026-04-13T19:25:41.769390538Z" level=info msg="StartContainer for \"54ed626dfcdb61c849af1343070ca8f9152fe63a156773a004d451e6b276cdff\" returns successfully"
Apr 13 19:25:41.778148 containerd[1924]: time="2026-04-13T19:25:41.778086698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 13 19:25:41.788613 systemd-logind[1907]: New session 10 of user core.
Apr 13 19:25:41.794042 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 13 19:25:41.797784 kubelet[3342]: I0413 19:25:41.796827 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 19:25:42.641868 sshd[6248]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:42.652120 systemd[1]: sshd@9-172.31.19.12:22-4.175.71.9:59428.service: Deactivated successfully.
Apr 13 19:25:42.660479 systemd[1]: session-10.scope: Deactivated successfully.
Apr 13 19:25:42.662504 systemd-logind[1907]: Session 10 logged out. Waiting for processes to exit.
Apr 13 19:25:42.665786 systemd-logind[1907]: Removed session 10.
Apr 13 19:25:43.644044 systemd[1]: run-containerd-runc-k8s.io-c5a8d344594749a0a85d2b1769031fe2debd9af1247b8ea58d90a841018581d4-runc.h8B5JH.mount: Deactivated successfully.
Apr 13 19:25:43.734461 containerd[1924]: time="2026-04-13T19:25:43.734377168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:43.736305 containerd[1924]: time="2026-04-13T19:25:43.736198240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291"
Apr 13 19:25:43.739791 containerd[1924]: time="2026-04-13T19:25:43.739696480Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:43.747522 containerd[1924]: time="2026-04-13T19:25:43.747426496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:43.752543 containerd[1924]: time="2026-04-13T19:25:43.751468456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.97284339s"
Apr 13 19:25:43.752543 containerd[1924]: time="2026-04-13T19:25:43.751541212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\""
Apr 13 19:25:43.762231 containerd[1924]: time="2026-04-13T19:25:43.762176704Z" level=info msg="CreateContainer within sandbox \"875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 13 19:25:43.795978 containerd[1924]: time="2026-04-13T19:25:43.795897484Z" level=info msg="CreateContainer within sandbox \"875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4475e553d94814ad53007b0f5ae253e61f2ba0a31cdf65589fa1f02e2abfe63c\""
Apr 13 19:25:43.797804 containerd[1924]: time="2026-04-13T19:25:43.797099740Z" level=info msg="StartContainer for \"4475e553d94814ad53007b0f5ae253e61f2ba0a31cdf65589fa1f02e2abfe63c\""
Apr 13 19:25:43.899602 systemd[1]: Started cri-containerd-4475e553d94814ad53007b0f5ae253e61f2ba0a31cdf65589fa1f02e2abfe63c.scope - libcontainer container 4475e553d94814ad53007b0f5ae253e61f2ba0a31cdf65589fa1f02e2abfe63c.
Apr 13 19:25:43.976536 containerd[1924]: time="2026-04-13T19:25:43.976275125Z" level=info msg="StartContainer for \"4475e553d94814ad53007b0f5ae253e61f2ba0a31cdf65589fa1f02e2abfe63c\" returns successfully"
Apr 13 19:25:44.230719 kubelet[3342]: I0413 19:25:44.230561 3342 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 13 19:25:44.230719 kubelet[3342]: I0413 19:25:44.230654 3342 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 13 19:25:47.823512 systemd[1]: Started sshd@10-172.31.19.12:22-4.175.71.9:60918.service - OpenSSH per-connection server daemon (4.175.71.9:60918).
Apr 13 19:25:48.875764 sshd[6409]: Accepted publickey for core from 4.175.71.9 port 60918 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:25:48.879607 sshd[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:25:48.887502 systemd-logind[1907]: New session 11 of user core.
Apr 13 19:25:48.897012 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 13 19:25:49.731097 sshd[6409]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:49.738390 systemd[1]: sshd@10-172.31.19.12:22-4.175.71.9:60918.service: Deactivated successfully.
Apr 13 19:25:49.743106 systemd[1]: session-11.scope: Deactivated successfully.
Apr 13 19:25:49.744807 systemd-logind[1907]: Session 11 logged out. Waiting for processes to exit.
Apr 13 19:25:49.746476 systemd-logind[1907]: Removed session 11.
Apr 13 19:25:49.907272 systemd[1]: Started sshd@11-172.31.19.12:22-4.175.71.9:60926.service - OpenSSH per-connection server daemon (4.175.71.9:60926).
Apr 13 19:25:50.915669 sshd[6444]: Accepted publickey for core from 4.175.71.9 port 60926 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:25:50.918398 sshd[6444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:25:50.930121 systemd-logind[1907]: New session 12 of user core.
Apr 13 19:25:50.937046 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 13 19:25:51.834512 sshd[6444]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:51.841359 systemd[1]: sshd@11-172.31.19.12:22-4.175.71.9:60926.service: Deactivated successfully.
Apr 13 19:25:51.847852 systemd[1]: session-12.scope: Deactivated successfully.
Apr 13 19:25:51.849922 systemd-logind[1907]: Session 12 logged out. Waiting for processes to exit.
Apr 13 19:25:51.853025 systemd-logind[1907]: Removed session 12.
Apr 13 19:25:52.003354 systemd[1]: Started sshd@12-172.31.19.12:22-4.175.71.9:60936.service - OpenSSH per-connection server daemon (4.175.71.9:60936).
Apr 13 19:25:52.994713 sshd[6456]: Accepted publickey for core from 4.175.71.9 port 60936 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:25:52.997955 sshd[6456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:25:53.014062 systemd-logind[1907]: New session 13 of user core.
Apr 13 19:25:53.021409 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 13 19:25:53.792836 sshd[6456]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:53.799436 systemd[1]: sshd@12-172.31.19.12:22-4.175.71.9:60936.service: Deactivated successfully.
Apr 13 19:25:53.806111 systemd[1]: session-13.scope: Deactivated successfully.
Apr 13 19:25:53.810226 systemd-logind[1907]: Session 13 logged out. Waiting for processes to exit.
Apr 13 19:25:53.813991 systemd-logind[1907]: Removed session 13.
Apr 13 19:25:58.989280 systemd[1]: Started sshd@13-172.31.19.12:22-4.175.71.9:49266.service - OpenSSH per-connection server daemon (4.175.71.9:49266).
Apr 13 19:26:00.032451 sshd[6472]: Accepted publickey for core from 4.175.71.9 port 49266 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:00.035277 sshd[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:00.044067 systemd-logind[1907]: New session 14 of user core.
Apr 13 19:26:00.049057 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 13 19:26:00.891296 sshd[6472]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:00.898349 systemd-logind[1907]: Session 14 logged out. Waiting for processes to exit.
Apr 13 19:26:00.898633 systemd[1]: sshd@13-172.31.19.12:22-4.175.71.9:49266.service: Deactivated successfully.
Apr 13 19:26:00.904389 systemd[1]: session-14.scope: Deactivated successfully.
Apr 13 19:26:00.908948 systemd-logind[1907]: Removed session 14.
Apr 13 19:26:01.072518 systemd[1]: Started sshd@14-172.31.19.12:22-4.175.71.9:49278.service - OpenSSH per-connection server daemon (4.175.71.9:49278).
Apr 13 19:26:02.101939 sshd[6494]: Accepted publickey for core from 4.175.71.9 port 49278 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:02.104762 sshd[6494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:02.114923 systemd-logind[1907]: New session 15 of user core.
Apr 13 19:26:02.122143 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 13 19:26:02.770651 systemd[1]: run-containerd-runc-k8s.io-57b79d6971537387067d4afa605a8921bf526a43c483b5d294977f92215a151a-runc.a7M9iL.mount: Deactivated successfully.
Apr 13 19:26:02.898191 kubelet[3342]: I0413 19:26:02.895813 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nzt8j" podStartSLOduration=56.772698597 podStartE2EDuration="1m10.895789451s" podCreationTimestamp="2026-04-13 19:24:52 +0000 UTC" firstStartedPulling="2026-04-13 19:25:29.630536042 +0000 UTC m=+65.967241613" lastFinishedPulling="2026-04-13 19:25:43.753626896 +0000 UTC m=+80.090332467" observedRunningTime="2026-04-13 19:25:44.869605517 +0000 UTC m=+81.206311124" watchObservedRunningTime="2026-04-13 19:26:02.895789451 +0000 UTC m=+99.232495370"
Apr 13 19:26:03.362716 sshd[6494]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:03.369944 systemd-logind[1907]: Session 15 logged out. Waiting for processes to exit.
Apr 13 19:26:03.370804 systemd[1]: sshd@14-172.31.19.12:22-4.175.71.9:49278.service: Deactivated successfully.
Apr 13 19:26:03.377289 systemd[1]: session-15.scope: Deactivated successfully.
Apr 13 19:26:03.383125 systemd-logind[1907]: Removed session 15.
Apr 13 19:26:03.535319 systemd[1]: Started sshd@15-172.31.19.12:22-4.175.71.9:49288.service - OpenSSH per-connection server daemon (4.175.71.9:49288).
Apr 13 19:26:04.521802 sshd[6531]: Accepted publickey for core from 4.175.71.9 port 49288 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:04.530064 sshd[6531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:04.544845 systemd-logind[1907]: New session 16 of user core.
Apr 13 19:26:04.553124 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 13 19:26:05.199838 kubelet[3342]: I0413 19:26:05.197836 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 19:26:06.297608 sshd[6531]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:06.307975 systemd-logind[1907]: Session 16 logged out. Waiting for processes to exit.
Apr 13 19:26:06.311956 systemd[1]: sshd@15-172.31.19.12:22-4.175.71.9:49288.service: Deactivated successfully.
Apr 13 19:26:06.317980 systemd[1]: session-16.scope: Deactivated successfully.
Apr 13 19:26:06.321892 systemd-logind[1907]: Removed session 16.
Apr 13 19:26:06.488318 systemd[1]: Started sshd@16-172.31.19.12:22-4.175.71.9:39542.service - OpenSSH per-connection server daemon (4.175.71.9:39542).
Apr 13 19:26:07.498661 sshd[6558]: Accepted publickey for core from 4.175.71.9 port 39542 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:07.501715 sshd[6558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:07.510771 systemd-logind[1907]: New session 17 of user core.
Apr 13 19:26:07.521061 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 13 19:26:08.244199 kubelet[3342]: I0413 19:26:08.243933 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 19:26:08.629454 sshd[6558]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:08.637355 systemd-logind[1907]: Session 17 logged out. Waiting for processes to exit.
Apr 13 19:26:08.640784 systemd[1]: sshd@16-172.31.19.12:22-4.175.71.9:39542.service: Deactivated successfully. Apr 13 19:26:08.647485 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 19:26:08.651159 systemd-logind[1907]: Removed session 17. Apr 13 19:26:08.821317 systemd[1]: Started sshd@17-172.31.19.12:22-4.175.71.9:39550.service - OpenSSH per-connection server daemon (4.175.71.9:39550). Apr 13 19:26:09.838283 sshd[6571]: Accepted publickey for core from 4.175.71.9 port 39550 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:09.840176 sshd[6571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:09.849586 systemd-logind[1907]: New session 18 of user core. Apr 13 19:26:09.857100 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 19:26:10.673839 sshd[6571]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:10.682130 systemd-logind[1907]: Session 18 logged out. Waiting for processes to exit. Apr 13 19:26:10.683695 systemd[1]: sshd@17-172.31.19.12:22-4.175.71.9:39550.service: Deactivated successfully. Apr 13 19:26:10.688430 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 19:26:10.691011 systemd-logind[1907]: Removed session 18. Apr 13 19:26:15.862343 systemd[1]: Started sshd@18-172.31.19.12:22-4.175.71.9:52674.service - OpenSSH per-connection server daemon (4.175.71.9:52674). Apr 13 19:26:16.881245 sshd[6632]: Accepted publickey for core from 4.175.71.9 port 52674 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:16.884271 sshd[6632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:16.893884 systemd-logind[1907]: New session 19 of user core. Apr 13 19:26:16.910112 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 13 19:26:17.698371 sshd[6632]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:17.707549 systemd[1]: sshd@18-172.31.19.12:22-4.175.71.9:52674.service: Deactivated successfully. Apr 13 19:26:17.712575 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 19:26:17.715668 systemd-logind[1907]: Session 19 logged out. Waiting for processes to exit. Apr 13 19:26:17.718639 systemd-logind[1907]: Removed session 19. Apr 13 19:26:22.881471 systemd[1]: Started sshd@19-172.31.19.12:22-4.175.71.9:52680.service - OpenSSH per-connection server daemon (4.175.71.9:52680). Apr 13 19:26:23.891154 sshd[6665]: Accepted publickey for core from 4.175.71.9 port 52680 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:23.893061 sshd[6665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:23.903101 systemd-logind[1907]: New session 20 of user core. Apr 13 19:26:23.910076 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 19:26:24.846679 sshd[6665]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:24.858516 systemd-logind[1907]: Session 20 logged out. Waiting for processes to exit. Apr 13 19:26:24.859488 systemd[1]: sshd@19-172.31.19.12:22-4.175.71.9:52680.service: Deactivated successfully. Apr 13 19:26:24.869876 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 19:26:24.876869 systemd-logind[1907]: Removed session 20. Apr 13 19:26:27.427937 containerd[1924]: time="2026-04-13T19:26:27.427862397Z" level=info msg="StopPodSandbox for \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\"" Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.525 [WARNING][6686] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0", GenerateName:"calico-apiserver-77999c4d5b-", Namespace:"calico-system", SelfLink:"", UID:"ff8ff070-0d4d-4815-8703-aa78cce64b54", ResourceVersion:"1375", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77999c4d5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6", Pod:"calico-apiserver-77999c4d5b-4q7t4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali285457d5f77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.525 [INFO][6686] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.526 [INFO][6686] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" iface="eth0" netns="" Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.526 [INFO][6686] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.526 [INFO][6686] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.602 [INFO][6693] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" HandleID="k8s-pod-network.e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.602 [INFO][6693] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.602 [INFO][6693] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.616 [WARNING][6693] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" HandleID="k8s-pod-network.e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.617 [INFO][6693] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" HandleID="k8s-pod-network.e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.619 [INFO][6693] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:26:27.628327 containerd[1924]: 2026-04-13 19:26:27.623 [INFO][6686] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:26:27.628327 containerd[1924]: time="2026-04-13T19:26:27.627942178Z" level=info msg="TearDown network for sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\" successfully" Apr 13 19:26:27.628327 containerd[1924]: time="2026-04-13T19:26:27.628009858Z" level=info msg="StopPodSandbox for \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\" returns successfully" Apr 13 19:26:27.632478 containerd[1924]: time="2026-04-13T19:26:27.631087858Z" level=info msg="RemovePodSandbox for \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\"" Apr 13 19:26:27.632478 containerd[1924]: time="2026-04-13T19:26:27.631141534Z" level=info msg="Forcibly stopping sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\"" Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.732 [WARNING][6707] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0", GenerateName:"calico-apiserver-77999c4d5b-", Namespace:"calico-system", SelfLink:"", UID:"ff8ff070-0d4d-4815-8703-aa78cce64b54", ResourceVersion:"1375", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77999c4d5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"f97c304110bb36bace49548398d4a93c643ad50f0ba012730e86024b8c0f8be6", Pod:"calico-apiserver-77999c4d5b-4q7t4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali285457d5f77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.732 [INFO][6707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.732 [INFO][6707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" iface="eth0" netns="" Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.732 [INFO][6707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.733 [INFO][6707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.802 [INFO][6715] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" HandleID="k8s-pod-network.e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.805 [INFO][6715] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.805 [INFO][6715] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.824 [WARNING][6715] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" HandleID="k8s-pod-network.e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.824 [INFO][6715] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" HandleID="k8s-pod-network.e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4q7t4-eth0" Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.829 [INFO][6715] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:26:27.839421 containerd[1924]: 2026-04-13 19:26:27.834 [INFO][6707] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd" Apr 13 19:26:27.840424 containerd[1924]: time="2026-04-13T19:26:27.839490167Z" level=info msg="TearDown network for sandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\" successfully" Apr 13 19:26:27.852044 containerd[1924]: time="2026-04-13T19:26:27.851360267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:26:27.852044 containerd[1924]: time="2026-04-13T19:26:27.851529851Z" level=info msg="RemovePodSandbox \"e129563da72b6a26a576dc8798fcc8f28a2ea715695cdddb0aa7ffb10e1a45fd\" returns successfully" Apr 13 19:26:27.852313 containerd[1924]: time="2026-04-13T19:26:27.852278159Z" level=info msg="StopPodSandbox for \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\"" Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:27.937 [WARNING][6729] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0", GenerateName:"calico-kube-controllers-6444d8f97b-", Namespace:"calico-system", SelfLink:"", UID:"fced93c3-4c99-4651-a35d-57e6eb8bc151", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6444d8f97b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994", Pod:"calico-kube-controllers-6444d8f97b-b7bhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45490a0a328", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:27.941 [INFO][6729] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:27.944 [INFO][6729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" iface="eth0" netns="" Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:27.945 [INFO][6729] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:27.945 [INFO][6729] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:28.000 [INFO][6736] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" HandleID="k8s-pod-network.9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:28.001 [INFO][6736] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:28.001 [INFO][6736] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:28.020 [WARNING][6736] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" HandleID="k8s-pod-network.9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:28.020 [INFO][6736] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" HandleID="k8s-pod-network.9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:28.023 [INFO][6736] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:26:28.038230 containerd[1924]: 2026-04-13 19:26:28.030 [INFO][6729] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:26:28.038230 containerd[1924]: time="2026-04-13T19:26:28.038028980Z" level=info msg="TearDown network for sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\" successfully" Apr 13 19:26:28.038230 containerd[1924]: time="2026-04-13T19:26:28.038067668Z" level=info msg="StopPodSandbox for \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\" returns successfully" Apr 13 19:26:28.040521 containerd[1924]: time="2026-04-13T19:26:28.039633308Z" level=info msg="RemovePodSandbox for \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\"" Apr 13 19:26:28.040521 containerd[1924]: time="2026-04-13T19:26:28.039695516Z" level=info msg="Forcibly stopping sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\"" Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.134 [WARNING][6751] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0", GenerateName:"calico-kube-controllers-6444d8f97b-", Namespace:"calico-system", SelfLink:"", UID:"fced93c3-4c99-4651-a35d-57e6eb8bc151", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6444d8f97b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"065820c10700e4a11c62913e7c8f89dfa274dc92f30b719389bb055ca8c08994", Pod:"calico-kube-controllers-6444d8f97b-b7bhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45490a0a328", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.134 [INFO][6751] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.134 [INFO][6751] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" iface="eth0" netns="" Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.134 [INFO][6751] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.134 [INFO][6751] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.189 [INFO][6758] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" HandleID="k8s-pod-network.9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.189 [INFO][6758] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.189 [INFO][6758] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.204 [WARNING][6758] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" HandleID="k8s-pod-network.9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.204 [INFO][6758] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" HandleID="k8s-pod-network.9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6444d8f97b--b7bhq-eth0" Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.207 [INFO][6758] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:26:28.215135 containerd[1924]: 2026-04-13 19:26:28.211 [INFO][6751] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44" Apr 13 19:26:28.216093 containerd[1924]: time="2026-04-13T19:26:28.215133237Z" level=info msg="TearDown network for sandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\" successfully" Apr 13 19:26:28.223596 containerd[1924]: time="2026-04-13T19:26:28.223477353Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:26:28.223789 containerd[1924]: time="2026-04-13T19:26:28.223630737Z" level=info msg="RemovePodSandbox \"9126aa79f656e4d3791d17d8dbbd53c5620708dbc96951d44a8650c7bb679d44\" returns successfully" Apr 13 19:26:28.224373 containerd[1924]: time="2026-04-13T19:26:28.224306925Z" level=info msg="StopPodSandbox for \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\"" Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.317 [WARNING][6772] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"145266ff-892c-4549-b337-19bfa44f9e42", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1", Pod:"coredns-66bc5c9577-c4w5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif04e5ffa2b9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.317 [INFO][6772] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.317 [INFO][6772] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" iface="eth0" netns="" Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.317 [INFO][6772] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.317 [INFO][6772] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.390 [INFO][6779] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" HandleID="k8s-pod-network.c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.390 [INFO][6779] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.390 [INFO][6779] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.411 [WARNING][6779] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" HandleID="k8s-pod-network.c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.411 [INFO][6779] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" HandleID="k8s-pod-network.c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.414 [INFO][6779] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:26:28.421883 containerd[1924]: 2026-04-13 19:26:28.418 [INFO][6772] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:26:28.422976 containerd[1924]: time="2026-04-13T19:26:28.421910614Z" level=info msg="TearDown network for sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\" successfully" Apr 13 19:26:28.422976 containerd[1924]: time="2026-04-13T19:26:28.422064670Z" level=info msg="StopPodSandbox for \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\" returns successfully" Apr 13 19:26:28.423649 containerd[1924]: time="2026-04-13T19:26:28.423604810Z" level=info msg="RemovePodSandbox for \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\"" Apr 13 19:26:28.423834 containerd[1924]: time="2026-04-13T19:26:28.423660550Z" level=info msg="Forcibly stopping sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\"" Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.538 [WARNING][6793] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"145266ff-892c-4549-b337-19bfa44f9e42", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"4f810b0f72d9f1c1d78b3917c5777e475be395bed5214d71f4a8b114c2cf0db1", Pod:"coredns-66bc5c9577-c4w5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif04e5ffa2b9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.539 [INFO][6793] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.539 [INFO][6793] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" iface="eth0" netns="" Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.539 [INFO][6793] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.539 [INFO][6793] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.611 [INFO][6800] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" HandleID="k8s-pod-network.c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.613 [INFO][6800] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.613 [INFO][6800] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.645 [WARNING][6800] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" HandleID="k8s-pod-network.c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.645 [INFO][6800] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" HandleID="k8s-pod-network.c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--c4w5w-eth0" Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.649 [INFO][6800] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:26:28.664145 containerd[1924]: 2026-04-13 19:26:28.655 [INFO][6793] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3" Apr 13 19:26:28.668355 containerd[1924]: time="2026-04-13T19:26:28.666693839Z" level=info msg="TearDown network for sandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\" successfully" Apr 13 19:26:28.676136 containerd[1924]: time="2026-04-13T19:26:28.676003715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:26:28.676560 containerd[1924]: time="2026-04-13T19:26:28.676409039Z" level=info msg="RemovePodSandbox \"c07d84d70386714484a4087cd6168d76d1bc84b76d3c2679ebd53d4eb0bcf6f3\" returns successfully" Apr 13 19:26:28.678662 containerd[1924]: time="2026-04-13T19:26:28.678136007Z" level=info msg="StopPodSandbox for \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\"" Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.781 [WARNING][6814] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5bbd3846-92e6-469c-993d-c2ef707609bb", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1", Pod:"csi-node-driver-nzt8j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73636d73cfc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.782 [INFO][6814] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.782 [INFO][6814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" iface="eth0" netns="" Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.782 [INFO][6814] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.782 [INFO][6814] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.845 [INFO][6821] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" HandleID="k8s-pod-network.3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0" Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.845 [INFO][6821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.845 [INFO][6821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.866 [WARNING][6821] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" HandleID="k8s-pod-network.3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0" Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.867 [INFO][6821] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" HandleID="k8s-pod-network.3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0" Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.869 [INFO][6821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:26:28.876521 containerd[1924]: 2026-04-13 19:26:28.872 [INFO][6814] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Apr 13 19:26:28.878505 containerd[1924]: time="2026-04-13T19:26:28.876925764Z" level=info msg="TearDown network for sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\" successfully" Apr 13 19:26:28.878505 containerd[1924]: time="2026-04-13T19:26:28.876970092Z" level=info msg="StopPodSandbox for \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\" returns successfully" Apr 13 19:26:28.879609 containerd[1924]: time="2026-04-13T19:26:28.879486912Z" level=info msg="RemovePodSandbox for \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\"" Apr 13 19:26:28.879943 containerd[1924]: time="2026-04-13T19:26:28.879640044Z" level=info msg="Forcibly stopping sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\"" Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:28.973 [WARNING][6835] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5bbd3846-92e6-469c-993d-c2ef707609bb", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"875dcafa4036f7360c27a1e36f8354b9194f86db1cdb45461862041e75416ae1", Pod:"csi-node-driver-nzt8j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73636d73cfc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:28.974 [INFO][6835] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:28.974 [INFO][6835] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" iface="eth0" netns="" Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:28.974 [INFO][6835] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:28.974 [INFO][6835] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:29.025 [INFO][6842] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" HandleID="k8s-pod-network.3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0" Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:29.025 [INFO][6842] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:29.025 [INFO][6842] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:29.055 [WARNING][6842] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" HandleID="k8s-pod-network.3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0" Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:29.055 [INFO][6842] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" HandleID="k8s-pod-network.3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Workload="ip--172--31--19--12-k8s-csi--node--driver--nzt8j-eth0" Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:29.058 [INFO][6842] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:26:29.066761 containerd[1924]: 2026-04-13 19:26:29.063 [INFO][6835] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7" Apr 13 19:26:29.067563 containerd[1924]: time="2026-04-13T19:26:29.066704805Z" level=info msg="TearDown network for sandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\" successfully" Apr 13 19:26:29.078173 containerd[1924]: time="2026-04-13T19:26:29.078085821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:26:29.078366 containerd[1924]: time="2026-04-13T19:26:29.078213189Z" level=info msg="RemovePodSandbox \"3411c45c32735c55e57ecb7bb3422d03023499682bff064645921d893fbe6be7\" returns successfully" Apr 13 19:26:29.079677 containerd[1924]: time="2026-04-13T19:26:29.079088517Z" level=info msg="StopPodSandbox for \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\"" Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.158 [WARNING][6857] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0", GenerateName:"calico-apiserver-77999c4d5b-", Namespace:"calico-system", SelfLink:"", UID:"222d069d-fa76-4e04-b779-eb3d366b1a95", ResourceVersion:"1339", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77999c4d5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127", Pod:"calico-apiserver-77999c4d5b-4qscg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5e28eac9c86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.159 [INFO][6857] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.159 [INFO][6857] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" iface="eth0" netns="" Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.159 [INFO][6857] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.159 [INFO][6857] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.202 [INFO][6864] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" HandleID="k8s-pod-network.cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.203 [INFO][6864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.203 [INFO][6864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.219 [WARNING][6864] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" HandleID="k8s-pod-network.cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.219 [INFO][6864] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" HandleID="k8s-pod-network.cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.222 [INFO][6864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:26:29.230007 containerd[1924]: 2026-04-13 19:26:29.225 [INFO][6857] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:26:29.231198 containerd[1924]: time="2026-04-13T19:26:29.230099410Z" level=info msg="TearDown network for sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\" successfully" Apr 13 19:26:29.231198 containerd[1924]: time="2026-04-13T19:26:29.230138146Z" level=info msg="StopPodSandbox for \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\" returns successfully" Apr 13 19:26:29.232172 containerd[1924]: time="2026-04-13T19:26:29.231606418Z" level=info msg="RemovePodSandbox for \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\"" Apr 13 19:26:29.232172 containerd[1924]: time="2026-04-13T19:26:29.231663658Z" level=info msg="Forcibly stopping sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\"" Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.345 [WARNING][6878] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0", GenerateName:"calico-apiserver-77999c4d5b-", Namespace:"calico-system", SelfLink:"", UID:"222d069d-fa76-4e04-b779-eb3d366b1a95", ResourceVersion:"1339", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77999c4d5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"c4d7338427f026c6a9a1a00db802d812c193b39accad8ee18ec061c8bae94127", Pod:"calico-apiserver-77999c4d5b-4qscg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5e28eac9c86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.345 [INFO][6878] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.345 [INFO][6878] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" iface="eth0" netns="" Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.345 [INFO][6878] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.345 [INFO][6878] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.408 [INFO][6886] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" HandleID="k8s-pod-network.cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.409 [INFO][6886] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.409 [INFO][6886] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.425 [WARNING][6886] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" HandleID="k8s-pod-network.cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.425 [INFO][6886] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" HandleID="k8s-pod-network.cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Workload="ip--172--31--19--12-k8s-calico--apiserver--77999c4d5b--4qscg-eth0" Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.429 [INFO][6886] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:26:29.436080 containerd[1924]: 2026-04-13 19:26:29.432 [INFO][6878] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625" Apr 13 19:26:29.438314 containerd[1924]: time="2026-04-13T19:26:29.436283855Z" level=info msg="TearDown network for sandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\" successfully" Apr 13 19:26:29.445859 containerd[1924]: time="2026-04-13T19:26:29.445715255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:26:29.446479 containerd[1924]: time="2026-04-13T19:26:29.446095607Z" level=info msg="RemovePodSandbox \"cebf03dc8e6094f084c554e7d7b46b856d6ca96ce79eb730ef1f031f74f4b625\" returns successfully" Apr 13 19:26:29.447065 containerd[1924]: time="2026-04-13T19:26:29.447016871Z" level=info msg="StopPodSandbox for \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\"" Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.523 [WARNING][6900] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d45a62d9-5e0c-4809-865b-4362b930842e", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3", Pod:"coredns-66bc5c9577-2478b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie210cc9f1ac", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.524 [INFO][6900] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.524 [INFO][6900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" iface="eth0" netns="" Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.524 [INFO][6900] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.524 [INFO][6900] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.570 [INFO][6908] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" HandleID="k8s-pod-network.a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0" Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.570 [INFO][6908] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.572 [INFO][6908] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.592 [WARNING][6908] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" HandleID="k8s-pod-network.a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0"
Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.592 [INFO][6908] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" HandleID="k8s-pod-network.a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0"
Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.595 [INFO][6908] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 19:26:29.604094 containerd[1924]: 2026-04-13 19:26:29.599 [INFO][6900] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918"
Apr 13 19:26:29.604094 containerd[1924]: time="2026-04-13T19:26:29.603860303Z" level=info msg="TearDown network for sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\" successfully"
Apr 13 19:26:29.604094 containerd[1924]: time="2026-04-13T19:26:29.603901763Z" level=info msg="StopPodSandbox for \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\" returns successfully"
Apr 13 19:26:29.606774 containerd[1924]: time="2026-04-13T19:26:29.605622887Z" level=info msg="RemovePodSandbox for \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\""
Apr 13 19:26:29.606774 containerd[1924]: time="2026-04-13T19:26:29.605712539Z" level=info msg="Forcibly stopping sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\""
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.689 [WARNING][6922] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d45a62d9-5e0c-4809-865b-4362b930842e", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"9bc57c113efc293b7358590d32d44dbadc17c874882a98bccb45ef9ac43c21f3", Pod:"coredns-66bc5c9577-2478b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie210cc9f1ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.690 [INFO][6922] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918"
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.690 [INFO][6922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" iface="eth0" netns=""
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.690 [INFO][6922] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918"
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.690 [INFO][6922] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918"
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.734 [INFO][6929] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" HandleID="k8s-pod-network.a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0"
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.734 [INFO][6929] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.734 [INFO][6929] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.749 [WARNING][6929] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" HandleID="k8s-pod-network.a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0"
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.749 [INFO][6929] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" HandleID="k8s-pod-network.a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918" Workload="ip--172--31--19--12-k8s-coredns--66bc5c9577--2478b-eth0"
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.752 [INFO][6929] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 19:26:29.758691 containerd[1924]: 2026-04-13 19:26:29.755 [INFO][6922] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918"
Apr 13 19:26:29.762025 containerd[1924]: time="2026-04-13T19:26:29.761621844Z" level=info msg="TearDown network for sandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\" successfully"
Apr 13 19:26:29.769266 containerd[1924]: time="2026-04-13T19:26:29.769138020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 19:26:29.769433 containerd[1924]: time="2026-04-13T19:26:29.769303296Z" level=info msg="RemovePodSandbox \"a58be12ae38a55d47886a6a16e7ca8b280c9d83ae71a616a6265e8f447b73918\" returns successfully"
Apr 13 19:26:30.022275 systemd[1]: Started sshd@20-172.31.19.12:22-4.175.71.9:35230.service - OpenSSH per-connection server daemon (4.175.71.9:35230).
Apr 13 19:26:31.039316 sshd[6936]: Accepted publickey for core from 4.175.71.9 port 35230 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:31.042007 sshd[6936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:31.053128 systemd-logind[1907]: New session 21 of user core.
Apr 13 19:26:31.061091 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 13 19:26:31.853935 sshd[6936]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:31.862038 systemd-logind[1907]: Session 21 logged out. Waiting for processes to exit.
Apr 13 19:26:31.862223 systemd[1]: sshd@20-172.31.19.12:22-4.175.71.9:35230.service: Deactivated successfully.
Apr 13 19:26:31.867627 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 19:26:31.870663 systemd-logind[1907]: Removed session 21.
Apr 13 19:26:37.021327 systemd[1]: Started sshd@21-172.31.19.12:22-4.175.71.9:57092.service - OpenSSH per-connection server daemon (4.175.71.9:57092).
Apr 13 19:26:37.993850 sshd[6971]: Accepted publickey for core from 4.175.71.9 port 57092 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:37.997165 sshd[6971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:38.007270 systemd-logind[1907]: New session 22 of user core.
Apr 13 19:26:38.013071 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 13 19:26:38.773916 sshd[6971]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:38.783099 systemd[1]: sshd@21-172.31.19.12:22-4.175.71.9:57092.service: Deactivated successfully.
Apr 13 19:26:38.788889 systemd[1]: session-22.scope: Deactivated successfully.
Apr 13 19:26:38.791595 systemd-logind[1907]: Session 22 logged out. Waiting for processes to exit.
Apr 13 19:26:38.794626 systemd-logind[1907]: Removed session 22.
Apr 13 19:26:40.814191 systemd[1]: run-containerd-runc-k8s.io-5b4abfcad99ba0d10d100742b9b309f953b6ea327f43e0c46b71b27eea98c926-runc.edZZbx.mount: Deactivated successfully.