Apr 13 19:23:15.224374 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 13 19:23:15.224975 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:23:15.225459 kernel: KASLR disabled due to lack of seed
Apr 13 19:23:15.225482 kernel: efi: EFI v2.7 by EDK II
Apr 13 19:23:15.225810 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Apr 13 19:23:15.225966 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:23:15.226192 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 13 19:23:15.226456 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 13 19:23:15.226684 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 13 19:23:15.226843 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 13 19:23:15.226869 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 13 19:23:15.226886 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 13 19:23:15.226902 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 13 19:23:15.226918 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 13 19:23:15.226938 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 13 19:23:15.226958 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 13 19:23:15.226976 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 13 19:23:15.226993 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 13 19:23:15.227010 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 13 19:23:15.227027 kernel: printk: bootconsole [uart0] enabled
Apr 13 19:23:15.227043 kernel: NUMA: Failed to initialise from firmware
Apr 13 19:23:15.227060 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 13 19:23:15.227077 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 13 19:23:15.227094 kernel: Zone ranges:
Apr 13 19:23:15.227111 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 13 19:23:15.227127 kernel: DMA32 empty
Apr 13 19:23:15.227147 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 13 19:23:15.227164 kernel: Movable zone start for each node
Apr 13 19:23:15.227181 kernel: Early memory node ranges
Apr 13 19:23:15.227198 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 13 19:23:15.227214 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 13 19:23:15.227231 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 13 19:23:15.227247 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 13 19:23:15.227264 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 13 19:23:15.227280 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 13 19:23:15.227297 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 13 19:23:15.227313 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 13 19:23:15.227330 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 13 19:23:15.227350 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 13 19:23:15.227368 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:23:15.227391 kernel: psci: PSCIv1.0 detected in firmware.
Apr 13 19:23:15.227429 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:23:15.227447 kernel: psci: Trusted OS migration not required
Apr 13 19:23:15.227471 kernel: psci: SMC Calling Convention v1.1
Apr 13 19:23:15.227489 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 13 19:23:15.227507 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:23:15.227525 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:23:15.227542 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:23:15.227560 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:23:15.227578 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:23:15.227595 kernel: CPU features: detected: Spectre-v2
Apr 13 19:23:15.227613 kernel: CPU features: detected: Spectre-v3a
Apr 13 19:23:15.227630 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:23:15.227647 kernel: CPU features: detected: ARM erratum 1742098
Apr 13 19:23:15.227669 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 13 19:23:15.227687 kernel: alternatives: applying boot alternatives
Apr 13 19:23:15.227707 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:23:15.227726 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:23:15.227743 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:23:15.227761 kernel: Fallback order for Node 0: 0
Apr 13 19:23:15.227779 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 13 19:23:15.227796 kernel: Policy zone: Normal
Apr 13 19:23:15.227813 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:23:15.227831 kernel: software IO TLB: area num 2.
Apr 13 19:23:15.227849 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 13 19:23:15.227872 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Apr 13 19:23:15.227890 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:23:15.227907 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:23:15.227925 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:23:15.227944 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:23:15.227962 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:23:15.227979 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:23:15.227997 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:23:15.228015 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:23:15.228032 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:23:15.228050 kernel: GICv3: 96 SPIs implemented
Apr 13 19:23:15.228071 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:23:15.228089 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:23:15.228106 kernel: GICv3: GICv3 features: 16 PPIs
Apr 13 19:23:15.228124 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 13 19:23:15.228142 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 13 19:23:15.228159 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 13 19:23:15.228177 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 13 19:23:15.228195 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 13 19:23:15.228213 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 13 19:23:15.228230 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 13 19:23:15.228248 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:23:15.228265 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 13 19:23:15.228288 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 13 19:23:15.228306 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 13 19:23:15.228324 kernel: Console: colour dummy device 80x25
Apr 13 19:23:15.228342 kernel: printk: console [tty1] enabled
Apr 13 19:23:15.228360 kernel: ACPI: Core revision 20230628
Apr 13 19:23:15.228378 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 13 19:23:15.228411 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:23:15.228434 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:23:15.228453 kernel: landlock: Up and running.
Apr 13 19:23:15.228476 kernel: SELinux: Initializing.
Apr 13 19:23:15.228495 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:23:15.228513 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:23:15.228531 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:23:15.228549 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:23:15.228567 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:23:15.228585 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:23:15.228603 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 13 19:23:15.228621 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 13 19:23:15.228643 kernel: Remapping and enabling EFI services.
Apr 13 19:23:15.228661 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:23:15.228679 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:23:15.228697 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 13 19:23:15.228715 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 13 19:23:15.228733 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 13 19:23:15.228751 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:23:15.228769 kernel: SMP: Total of 2 processors activated.
Apr 13 19:23:15.228786 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:23:15.228808 kernel: CPU features: detected: 32-bit EL1 Support
Apr 13 19:23:15.228827 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:23:15.228845 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:23:15.228873 kernel: alternatives: applying system-wide alternatives
Apr 13 19:23:15.228896 kernel: devtmpfs: initialized
Apr 13 19:23:15.228915 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:23:15.228934 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:23:15.228952 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:23:15.228971 kernel: SMBIOS 3.0.0 present.
Apr 13 19:23:15.228994 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 13 19:23:15.229013 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:23:15.229032 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:23:15.229050 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:23:15.229069 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:23:15.229088 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:23:15.229107 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Apr 13 19:23:15.229125 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:23:15.229189 kernel: cpuidle: using governor menu
Apr 13 19:23:15.229246 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:23:15.229301 kernel: ASID allocator initialised with 65536 entries
Apr 13 19:23:15.229375 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:23:15.229439 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:23:15.229460 kernel: Modules: 17488 pages in range for non-PLT usage
Apr 13 19:23:15.229479 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:23:15.229498 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:23:15.229517 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:23:15.229543 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:23:15.229563 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:23:15.229582 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:23:15.229601 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:23:15.229620 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:23:15.229639 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:23:15.229657 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:23:15.229676 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:23:15.229696 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:23:15.229718 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:23:15.229738 kernel: ACPI: Interpreter enabled
Apr 13 19:23:15.229757 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:23:15.229775 kernel: ACPI: MCFG table detected, 1 entries
Apr 13 19:23:15.229794 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 13 19:23:15.230117 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 19:23:15.230354 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 13 19:23:15.230633 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 13 19:23:15.230852 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 13 19:23:15.231069 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 13 19:23:15.231096 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 13 19:23:15.231115 kernel: acpiphp: Slot [1] registered
Apr 13 19:23:15.231135 kernel: acpiphp: Slot [2] registered
Apr 13 19:23:15.231154 kernel: acpiphp: Slot [3] registered
Apr 13 19:23:15.231172 kernel: acpiphp: Slot [4] registered
Apr 13 19:23:15.231191 kernel: acpiphp: Slot [5] registered
Apr 13 19:23:15.231216 kernel: acpiphp: Slot [6] registered
Apr 13 19:23:15.231235 kernel: acpiphp: Slot [7] registered
Apr 13 19:23:15.231253 kernel: acpiphp: Slot [8] registered
Apr 13 19:23:15.231272 kernel: acpiphp: Slot [9] registered
Apr 13 19:23:15.231291 kernel: acpiphp: Slot [10] registered
Apr 13 19:23:15.231309 kernel: acpiphp: Slot [11] registered
Apr 13 19:23:15.231328 kernel: acpiphp: Slot [12] registered
Apr 13 19:23:15.231347 kernel: acpiphp: Slot [13] registered
Apr 13 19:23:15.231366 kernel: acpiphp: Slot [14] registered
Apr 13 19:23:15.231384 kernel: acpiphp: Slot [15] registered
Apr 13 19:23:15.231463 kernel: acpiphp: Slot [16] registered
Apr 13 19:23:15.231488 kernel: acpiphp: Slot [17] registered
Apr 13 19:23:15.231508 kernel: acpiphp: Slot [18] registered
Apr 13 19:23:15.231526 kernel: acpiphp: Slot [19] registered
Apr 13 19:23:15.231549 kernel: acpiphp: Slot [20] registered
Apr 13 19:23:15.231568 kernel: acpiphp: Slot [21] registered
Apr 13 19:23:15.231586 kernel: acpiphp: Slot [22] registered
Apr 13 19:23:15.231605 kernel: acpiphp: Slot [23] registered
Apr 13 19:23:15.231624 kernel: acpiphp: Slot [24] registered
Apr 13 19:23:15.231649 kernel: acpiphp: Slot [25] registered
Apr 13 19:23:15.231668 kernel: acpiphp: Slot [26] registered
Apr 13 19:23:15.231687 kernel: acpiphp: Slot [27] registered
Apr 13 19:23:15.231706 kernel: acpiphp: Slot [28] registered
Apr 13 19:23:15.231725 kernel: acpiphp: Slot [29] registered
Apr 13 19:23:15.231743 kernel: acpiphp: Slot [30] registered
Apr 13 19:23:15.231761 kernel: acpiphp: Slot [31] registered
Apr 13 19:23:15.231780 kernel: PCI host bridge to bus 0000:00
Apr 13 19:23:15.232013 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 13 19:23:15.232216 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 13 19:23:15.232434 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 13 19:23:15.232648 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 13 19:23:15.232891 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 13 19:23:15.233134 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 13 19:23:15.233377 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 13 19:23:15.233702 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 13 19:23:15.233919 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 13 19:23:15.234133 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 13 19:23:15.234366 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 13 19:23:15.234604 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 13 19:23:15.234817 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 13 19:23:15.235033 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 13 19:23:15.235321 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 13 19:23:15.235547 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 13 19:23:15.235741 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 13 19:23:15.235933 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 13 19:23:15.235959 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 13 19:23:15.235978 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 13 19:23:15.235998 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 13 19:23:15.236016 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 13 19:23:15.236042 kernel: iommu: Default domain type: Translated
Apr 13 19:23:15.236062 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:23:15.236080 kernel: efivars: Registered efivars operations
Apr 13 19:23:15.236099 kernel: vgaarb: loaded
Apr 13 19:23:15.236117 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:23:15.236135 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:23:15.236154 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:23:15.236172 kernel: pnp: PnP ACPI init
Apr 13 19:23:15.236387 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 13 19:23:15.236439 kernel: pnp: PnP ACPI: found 1 devices
Apr 13 19:23:15.236459 kernel: NET: Registered PF_INET protocol family
Apr 13 19:23:15.236478 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:23:15.236497 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:23:15.236517 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:23:15.236564 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:23:15.236604 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 19:23:15.236628 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 19:23:15.236653 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:23:15.236673 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:23:15.236692 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 19:23:15.236711 kernel: PCI: CLS 0 bytes, default 64
Apr 13 19:23:15.236730 kernel: kvm [1]: HYP mode not available
Apr 13 19:23:15.236749 kernel: Initialise system trusted keyrings
Apr 13 19:23:15.236767 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 19:23:15.236786 kernel: Key type asymmetric registered
Apr 13 19:23:15.236805 kernel: Asymmetric key parser 'x509' registered
Apr 13 19:23:15.236829 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 13 19:23:15.236848 kernel: io scheduler mq-deadline registered
Apr 13 19:23:15.236867 kernel: io scheduler kyber registered
Apr 13 19:23:15.236886 kernel: io scheduler bfq registered
Apr 13 19:23:15.237130 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 13 19:23:15.237176 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 13 19:23:15.237197 kernel: ACPI: button: Power Button [PWRB]
Apr 13 19:23:15.237217 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 13 19:23:15.237235 kernel: ACPI: button: Sleep Button [SLPB]
Apr 13 19:23:15.237261 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 19:23:15.237282 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 13 19:23:15.237537 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 13 19:23:15.237566 kernel: printk: console [ttyS0] disabled
Apr 13 19:23:15.237586 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 13 19:23:15.237605 kernel: printk: console [ttyS0] enabled
Apr 13 19:23:15.237624 kernel: printk: bootconsole [uart0] disabled
Apr 13 19:23:15.237642 kernel: thunder_xcv, ver 1.0
Apr 13 19:23:15.237661 kernel: thunder_bgx, ver 1.0
Apr 13 19:23:15.237686 kernel: nicpf, ver 1.0
Apr 13 19:23:15.237705 kernel: nicvf, ver 1.0
Apr 13 19:23:15.237937 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 13 19:23:15.238142 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:23:14 UTC (1776108194)
Apr 13 19:23:15.238168 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 13 19:23:15.238188 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 13 19:23:15.238207 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 13 19:23:15.238226 kernel: watchdog: Hard watchdog permanently disabled
Apr 13 19:23:15.238250 kernel: NET: Registered PF_INET6 protocol family
Apr 13 19:23:15.238269 kernel: Segment Routing with IPv6
Apr 13 19:23:15.238288 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 19:23:15.238306 kernel: NET: Registered PF_PACKET protocol family
Apr 13 19:23:15.238325 kernel: Key type dns_resolver registered
Apr 13 19:23:15.238344 kernel: registered taskstats version 1
Apr 13 19:23:15.238362 kernel: Loading compiled-in X.509 certificates
Apr 13 19:23:15.238381 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7'
Apr 13 19:23:15.238447 kernel: Key type .fscrypt registered
Apr 13 19:23:15.238475 kernel: Key type fscrypt-provisioning registered
Apr 13 19:23:15.238495 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 19:23:15.238514 kernel: ima: Allocated hash algorithm: sha1
Apr 13 19:23:15.238616 kernel: ima: No architecture policies found
Apr 13 19:23:15.238846 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 13 19:23:15.239109 kernel: clk: Disabling unused clocks
Apr 13 19:23:15.239233 kernel: Freeing unused kernel memory: 39424K
Apr 13 19:23:15.239614 kernel: Run /init as init process
Apr 13 19:23:15.239840 kernel: with arguments:
Apr 13 19:23:15.240108 kernel: /init
Apr 13 19:23:15.240335 kernel: with environment:
Apr 13 19:23:15.240608 kernel: HOME=/
Apr 13 19:23:15.240965 kernel: TERM=linux
Apr 13 19:23:15.241098 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:23:15.241337 systemd[1]: Detected virtualization amazon.
Apr 13 19:23:15.241608 systemd[1]: Detected architecture arm64.
Apr 13 19:23:15.241763 systemd[1]: Running in initrd.
Apr 13 19:23:15.241793 systemd[1]: No hostname configured, using default hostname.
Apr 13 19:23:15.241813 systemd[1]: Hostname set to <localhost>.
Apr 13 19:23:15.241834 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:23:15.241854 systemd[1]: Queued start job for default target initrd.target.
Apr 13 19:23:15.241874 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:23:15.241894 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:23:15.241916 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 19:23:15.241936 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:23:15.241961 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 19:23:15.241982 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 19:23:15.242005 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 19:23:15.242026 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 19:23:15.242047 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:23:15.242068 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:23:15.242092 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:23:15.242113 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:23:15.242133 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:23:15.242153 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:23:15.242174 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:23:15.242194 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:23:15.242215 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:23:15.242235 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:23:15.242256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:23:15.242281 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:23:15.242302 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:23:15.242322 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:23:15.242342 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 19:23:15.242363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:23:15.242383 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 19:23:15.242421 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 19:23:15.242445 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:23:15.242465 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:23:15.242492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:23:15.242513 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 19:23:15.242573 systemd-journald[251]: Collecting audit messages is disabled.
Apr 13 19:23:15.242619 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:23:15.242644 systemd-journald[251]: Journal started
Apr 13 19:23:15.242681 systemd-journald[251]: Runtime Journal (/run/log/journal/ec221e111e2fcbee02668f574f5f6873) is 8.0M, max 75.3M, 67.3M free.
Apr 13 19:23:15.231512 systemd-modules-load[252]: Inserted module 'overlay'
Apr 13 19:23:15.252468 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:23:15.255162 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 19:23:15.273630 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 19:23:15.273938 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:23:15.285500 systemd-modules-load[252]: Inserted module 'br_netfilter'
Apr 13 19:23:15.288212 kernel: Bridge firewalling registered
Apr 13 19:23:15.288865 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:23:15.297538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:23:15.314084 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:15.325551 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:23:15.347679 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:23:15.360820 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:23:15.370539 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:23:15.391854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:23:15.402012 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:23:15.420623 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:23:15.429454 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:23:15.439483 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:23:15.452209 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 19:23:15.484292 dracut-cmdline[292]: dracut-dracut-053
Apr 13 19:23:15.495836 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:23:15.525943 systemd-resolved[285]: Positive Trust Anchors:
Apr 13 19:23:15.525977 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:23:15.526040 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:23:15.651440 kernel: SCSI subsystem initialized
Apr 13 19:23:15.660421 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 19:23:15.672455 kernel: iscsi: registered transport (tcp)
Apr 13 19:23:15.694782 kernel: iscsi: registered transport (qla4xxx)
Apr 13 19:23:15.694852 kernel: QLogic iSCSI HBA Driver
Apr 13 19:23:15.764861 kernel: random: crng init done
Apr 13 19:23:15.765231 systemd-resolved[285]: Defaulting to hostname 'linux'.
Apr 13 19:23:15.767374 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:23:15.772452 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:23:15.803450 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:23:15.817775 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 19:23:15.853907 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 19:23:15.853980 kernel: device-mapper: uevent: version 1.0.3
Apr 13 19:23:15.855898 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 19:23:15.922454 kernel: raid6: neonx8 gen() 6728 MB/s
Apr 13 19:23:15.939429 kernel: raid6: neonx4 gen() 6600 MB/s
Apr 13 19:23:15.956429 kernel: raid6: neonx2 gen() 5485 MB/s
Apr 13 19:23:15.973429 kernel: raid6: neonx1 gen() 3968 MB/s
Apr 13 19:23:15.990429 kernel: raid6: int64x8 gen() 3826 MB/s
Apr 13 19:23:16.007429 kernel: raid6: int64x4 gen() 3729 MB/s
Apr 13 19:23:16.024429 kernel: raid6: int64x2 gen() 3615 MB/s
Apr 13 19:23:16.042484 kernel: raid6: int64x1 gen() 2764 MB/s
Apr 13 19:23:16.042521 kernel: raid6: using algorithm neonx8 gen() 6728 MB/s
Apr 13 19:23:16.061468 kernel: raid6: .... xor() 4776 MB/s, rmw enabled
Apr 13 19:23:16.061504 kernel: raid6: using neon recovery algorithm
Apr 13 19:23:16.069433 kernel: xor: measuring software checksum speed
Apr 13 19:23:16.069494 kernel: 8regs : 9930 MB/sec
Apr 13 19:23:16.072924 kernel: 32regs : 11016 MB/sec
Apr 13 19:23:16.072957 kernel: arm64_neon : 9558 MB/sec
Apr 13 19:23:16.072982 kernel: xor: using function: 32regs (11016 MB/sec)
Apr 13 19:23:16.157712 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 19:23:16.176434 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:23:16.187726 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:23:16.232215 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Apr 13 19:23:16.241374 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:23:16.266859 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 19:23:16.294221 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Apr 13 19:23:16.352208 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:23:16.363743 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:23:16.477567 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:23:16.501083 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 19:23:16.547374 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:23:16.559339 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:23:16.564478 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:23:16.575380 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:23:16.594857 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 19:23:16.628641 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:23:16.683875 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 13 19:23:16.683937 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 13 19:23:16.693823 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 13 19:23:16.694179 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 13 19:23:16.705436 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:ef:7e:17:fd:69
Apr 13 19:23:16.705788 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 13 19:23:16.705816 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 13 19:23:16.711183 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:23:16.711934 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:23:16.719671 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 13 19:23:16.723284 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:23:16.742033 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 19:23:16.742070 kernel: GPT:9289727 != 33554431
Apr 13 19:23:16.742097 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 19:23:16.742123 kernel: GPT:9289727 != 33554431
Apr 13 19:23:16.742153 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 19:23:16.742179 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:16.726407 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:23:16.726518 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:16.737383 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:23:16.757228 (udev-worker)[537]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 19:23:16.760501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:23:16.805347 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:16.825034 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:23:16.863447 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (533)
Apr 13 19:23:16.895074 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:23:16.904361 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (526)
Apr 13 19:23:16.973472 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 13 19:23:16.998079 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 13 19:23:17.021152 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 13 19:23:17.024089 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 13 19:23:17.051264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 19:23:17.067662 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 19:23:17.083747 disk-uuid[665]: Primary Header is updated.
Apr 13 19:23:17.083747 disk-uuid[665]: Secondary Entries is updated.
Apr 13 19:23:17.083747 disk-uuid[665]: Secondary Header is updated.
Apr 13 19:23:17.092459 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:17.102490 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:17.112436 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:18.117054 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:18.117145 disk-uuid[666]: The operation has completed successfully.
Apr 13 19:23:18.305179 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 19:23:18.308029 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 19:23:18.369712 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 19:23:18.382023 sh[1009]: Success
Apr 13 19:23:18.409446 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 13 19:23:18.506764 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 19:23:18.516458 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 19:23:18.529768 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 19:23:18.567896 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd
Apr 13 19:23:18.567958 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:23:18.570438 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 19:23:18.570474 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 19:23:18.572525 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 19:23:18.597432 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 19:23:18.601934 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 19:23:18.602460 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 19:23:18.616786 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 19:23:18.619708 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 19:23:18.663634 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:18.663709 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:23:18.665409 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:23:18.684436 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:23:18.704232 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 19:23:18.707586 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:18.718359 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 19:23:18.730156 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 19:23:18.815791 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:23:18.835828 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:23:18.890888 systemd-networkd[1202]: lo: Link UP
Apr 13 19:23:18.890905 systemd-networkd[1202]: lo: Gained carrier
Apr 13 19:23:18.895301 systemd-networkd[1202]: Enumeration completed
Apr 13 19:23:18.895477 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:23:18.896807 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:23:18.896814 systemd-networkd[1202]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:23:18.903212 systemd[1]: Reached target network.target - Network.
Apr 13 19:23:18.924676 systemd-networkd[1202]: eth0: Link UP
Apr 13 19:23:18.924690 systemd-networkd[1202]: eth0: Gained carrier
Apr 13 19:23:18.924709 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:23:18.955551 systemd-networkd[1202]: eth0: DHCPv4 address 172.31.26.195/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 19:23:19.001198 ignition[1136]: Ignition 2.19.0
Apr 13 19:23:19.001735 ignition[1136]: Stage: fetch-offline
Apr 13 19:23:19.003317 ignition[1136]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:19.003342 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:19.004189 ignition[1136]: Ignition finished successfully
Apr 13 19:23:19.016482 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:23:19.027171 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 19:23:19.052583 ignition[1211]: Ignition 2.19.0
Apr 13 19:23:19.052603 ignition[1211]: Stage: fetch
Apr 13 19:23:19.053253 ignition[1211]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:19.053279 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:19.053475 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:19.069279 ignition[1211]: PUT result: OK
Apr 13 19:23:19.072284 ignition[1211]: parsed url from cmdline: ""
Apr 13 19:23:19.072299 ignition[1211]: no config URL provided
Apr 13 19:23:19.072319 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:23:19.072345 ignition[1211]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:23:19.072375 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:19.074848 ignition[1211]: PUT result: OK
Apr 13 19:23:19.074922 ignition[1211]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 13 19:23:19.087484 ignition[1211]: GET result: OK
Apr 13 19:23:19.087839 ignition[1211]: parsing config with SHA512: c2e9bb340c7c40bb6f8b2c7c32733caa4130bf71fea5e9ddf6c5defc85884600edb4d37d6e16e5c6a29b434b8d1fa8c1c92d5543292423024f84fc0542b636ba
Apr 13 19:23:19.096193 unknown[1211]: fetched base config from "system"
Apr 13 19:23:19.096459 unknown[1211]: fetched base config from "system"
Apr 13 19:23:19.097203 ignition[1211]: fetch: fetch complete
Apr 13 19:23:19.096474 unknown[1211]: fetched user config from "aws"
Apr 13 19:23:19.097214 ignition[1211]: fetch: fetch passed
Apr 13 19:23:19.097324 ignition[1211]: Ignition finished successfully
Apr 13 19:23:19.111482 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 19:23:19.120887 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 19:23:19.149063 ignition[1217]: Ignition 2.19.0
Apr 13 19:23:19.149083 ignition[1217]: Stage: kargs
Apr 13 19:23:19.150250 ignition[1217]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:19.150276 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:19.150476 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:19.161385 ignition[1217]: PUT result: OK
Apr 13 19:23:19.166543 ignition[1217]: kargs: kargs passed
Apr 13 19:23:19.166702 ignition[1217]: Ignition finished successfully
Apr 13 19:23:19.172089 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 19:23:19.185840 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 19:23:19.213649 ignition[1223]: Ignition 2.19.0
Apr 13 19:23:19.213669 ignition[1223]: Stage: disks
Apr 13 19:23:19.214263 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:19.214287 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:19.214470 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:19.226021 ignition[1223]: PUT result: OK
Apr 13 19:23:19.231861 ignition[1223]: disks: disks passed
Apr 13 19:23:19.231957 ignition[1223]: Ignition finished successfully
Apr 13 19:23:19.233777 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 19:23:19.238183 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 19:23:19.241277 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 19:23:19.244557 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:23:19.249037 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:23:19.251675 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:23:19.268989 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 19:23:19.317294 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 19:23:19.321674 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 19:23:19.337845 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 19:23:19.420469 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none.
Apr 13 19:23:19.420643 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 19:23:19.425849 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:23:19.445557 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:23:19.452167 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 19:23:19.459648 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 19:23:19.459732 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 19:23:19.477533 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1250)
Apr 13 19:23:19.477571 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:19.459781 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:23:19.483392 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:23:19.485832 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:23:19.500853 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 19:23:19.511728 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:23:19.516789 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 19:23:19.526222 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:23:19.634533 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 19:23:19.645201 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory
Apr 13 19:23:19.654118 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 19:23:19.662610 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 19:23:19.820970 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 19:23:19.831723 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 19:23:19.839192 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 19:23:19.856494 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 19:23:19.865336 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:19.906505 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 19:23:19.916589 ignition[1365]: INFO : Ignition 2.19.0
Apr 13 19:23:19.916589 ignition[1365]: INFO : Stage: mount
Apr 13 19:23:19.916589 ignition[1365]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:19.916589 ignition[1365]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:19.916589 ignition[1365]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:19.930660 ignition[1365]: INFO : PUT result: OK
Apr 13 19:23:19.936027 ignition[1365]: INFO : mount: mount passed
Apr 13 19:23:19.937968 ignition[1365]: INFO : Ignition finished successfully
Apr 13 19:23:19.943230 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 19:23:19.961216 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 19:23:19.978205 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:23:20.018434 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1376)
Apr 13 19:23:20.022429 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:20.022474 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:23:20.022501 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:23:20.028430 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:23:20.032717 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:23:20.066447 ignition[1393]: INFO : Ignition 2.19.0
Apr 13 19:23:20.066447 ignition[1393]: INFO : Stage: files
Apr 13 19:23:20.072130 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:20.072130 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:20.072130 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:20.072130 ignition[1393]: INFO : PUT result: OK
Apr 13 19:23:20.083783 ignition[1393]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 19:23:20.087089 ignition[1393]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 19:23:20.087089 ignition[1393]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 19:23:20.094725 ignition[1393]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 19:23:20.094725 ignition[1393]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 19:23:20.094725 ignition[1393]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 19:23:20.094725 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 19:23:20.091249 unknown[1393]: wrote ssh authorized keys file for user: core
Apr 13 19:23:20.113386 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 19:23:20.113386 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:23:20.113386 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 13 19:23:20.176650 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 19:23:20.209615 systemd-networkd[1202]: eth0: Gained IPv6LL
Apr 13 19:23:20.458231 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:23:20.458231 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 19:23:20.458231 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 19:23:20.458231 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:23:20.482071 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:23:20.482071 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:23:20.482071 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:23:20.482071 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:23:20.482071 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:23:20.482071 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:23:20.482071 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:23:20.482071 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:23:20.482071 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:23:20.482071 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:23:20.482071 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 13 19:23:20.977774 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 19:23:21.380754 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:23:21.380754 ignition[1393]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:23:21.389147 ignition[1393]: INFO : files: files passed
Apr 13 19:23:21.389147 ignition[1393]: INFO : Ignition finished successfully
Apr 13 19:23:21.443285 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 19:23:21.455807 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 19:23:21.467619 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 19:23:21.476245 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 19:23:21.481324 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 19:23:21.505960 initrd-setup-root-after-ignition[1421]: grep:
Apr 13 19:23:21.505960 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:23:21.512440 initrd-setup-root-after-ignition[1421]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:23:21.512440 initrd-setup-root-after-ignition[1421]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:23:21.524487 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:23:21.531543 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 19:23:21.547354 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 19:23:21.603436 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 19:23:21.603819 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 19:23:21.612274 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 19:23:21.615081 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 19:23:21.618018 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 19:23:21.631734 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 19:23:21.665496 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:23:21.680683 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 19:23:21.705604 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:23:21.711537 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:23:21.714910 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 19:23:21.717443 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 19:23:21.717677 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:23:21.721313 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 19:23:21.724114 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 19:23:21.726802 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 19:23:21.732768 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:23:21.736021 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 19:23:21.739190 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 19:23:21.742109 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:23:21.745585 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 19:23:21.748518 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 19:23:21.751362 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 19:23:21.753672 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 19:23:21.753899 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:23:21.757141 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:23:21.760214 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:23:21.763532 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 19:23:21.763828 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:23:21.769435 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 19:23:21.769661 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:23:21.772912 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 19:23:21.773229 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:23:21.776787 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 19:23:21.777062 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 19:23:21.811243 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 19:23:21.815031 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 19:23:21.815468 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:23:21.869770 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 19:23:21.876337 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 19:23:21.877157 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:23:21.887285 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 19:23:21.890871 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:23:21.906558 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 19:23:21.910651 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 19:23:21.925716 ignition[1445]: INFO : Ignition 2.19.0
Apr 13 19:23:21.928359 ignition[1445]: INFO : Stage: umount
Apr 13 19:23:21.931917 ignition[1445]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:21.931917 ignition[1445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:21.931917 ignition[1445]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:21.935577 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 19:23:21.943244 ignition[1445]: INFO : PUT result: OK Apr 13 19:23:21.950112 ignition[1445]: INFO : umount: umount passed Apr 13 19:23:21.952095 ignition[1445]: INFO : Ignition finished successfully Apr 13 19:23:21.957611 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 19:23:21.957919 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 19:23:21.967222 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 19:23:21.967333 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 19:23:21.971584 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 19:23:21.971681 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 19:23:21.974566 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 19:23:21.975177 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 19:23:21.979770 systemd[1]: Stopped target network.target - Network. Apr 13 19:23:21.981889 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 19:23:21.981986 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:23:21.984224 systemd[1]: Stopped target paths.target - Path Units. Apr 13 19:23:21.984551 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 19:23:22.011156 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:23:22.014531 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 19:23:22.024984 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 19:23:22.029628 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 19:23:22.029716 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:23:22.032273 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 19:23:22.032345 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Apr 13 19:23:22.035056 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 19:23:22.035150 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 19:23:22.037829 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 19:23:22.037927 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 19:23:22.040679 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 19:23:22.045659 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 19:23:22.078816 systemd-networkd[1202]: eth0: DHCPv6 lease lost Apr 13 19:23:22.082334 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 19:23:22.082632 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 19:23:22.089941 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 19:23:22.090037 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:23:22.112658 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 19:23:22.118299 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 19:23:22.120927 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:23:22.128238 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:23:22.131853 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 19:23:22.134587 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 19:23:22.154508 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 19:23:22.154978 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 19:23:22.162892 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 19:23:22.164648 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 13 19:23:22.183522 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 19:23:22.183634 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 19:23:22.187195 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 19:23:22.187272 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:23:22.191177 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 19:23:22.191270 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:23:22.196139 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 19:23:22.196227 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 19:23:22.199417 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:23:22.199503 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:23:22.203161 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 19:23:22.203632 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 19:23:22.238804 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 19:23:22.241535 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 19:23:22.241651 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:23:22.245011 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 19:23:22.245110 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 19:23:22.261618 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 19:23:22.261714 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:23:22.265022 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 19:23:22.265115 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 13 19:23:22.268457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:23:22.268535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:23:22.294632 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 19:23:22.295065 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 19:23:22.309189 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 19:23:22.309440 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 19:23:22.316463 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 19:23:22.336786 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 19:23:22.355715 systemd[1]: Switching root. Apr 13 19:23:22.393549 systemd-journald[251]: Journal stopped Apr 13 19:23:24.250209 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Apr 13 19:23:24.250362 kernel: SELinux: policy capability network_peer_controls=1 Apr 13 19:23:24.252489 kernel: SELinux: policy capability open_perms=1 Apr 13 19:23:24.252539 kernel: SELinux: policy capability extended_socket_class=1 Apr 13 19:23:24.252571 kernel: SELinux: policy capability always_check_network=0 Apr 13 19:23:24.252602 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 13 19:23:24.252633 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 13 19:23:24.252663 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 13 19:23:24.252701 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 13 19:23:24.252732 kernel: audit: type=1403 audit(1776108202.708:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 13 19:23:24.252781 systemd[1]: Successfully loaded SELinux policy in 51.059ms. Apr 13 19:23:24.252836 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.736ms. 
Apr 13 19:23:24.252872 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 19:23:24.252905 systemd[1]: Detected virtualization amazon. Apr 13 19:23:24.252938 systemd[1]: Detected architecture arm64. Apr 13 19:23:24.252969 systemd[1]: Detected first boot. Apr 13 19:23:24.253003 systemd[1]: Initializing machine ID from VM UUID. Apr 13 19:23:24.253035 zram_generator::config[1505]: No configuration found. Apr 13 19:23:24.253098 systemd[1]: Populated /etc with preset unit settings. Apr 13 19:23:24.253134 systemd[1]: Queued start job for default target multi-user.target. Apr 13 19:23:24.253169 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Apr 13 19:23:24.253203 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 13 19:23:24.253236 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 13 19:23:24.253268 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 13 19:23:24.253300 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 13 19:23:24.253331 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 13 19:23:24.253365 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 13 19:23:24.253882 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 13 19:23:24.253930 systemd[1]: Created slice user.slice - User and Session Slice. Apr 13 19:23:24.253968 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 13 19:23:24.254000 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:23:24.254030 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 13 19:23:24.254060 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 13 19:23:24.254095 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 13 19:23:24.254137 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 19:23:24.254175 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 13 19:23:24.254205 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:23:24.254237 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 13 19:23:24.254269 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:23:24.254301 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:23:24.254331 systemd[1]: Reached target slices.target - Slice Units. Apr 13 19:23:24.254362 systemd[1]: Reached target swap.target - Swaps. Apr 13 19:23:24.254392 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 19:23:24.254540 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 13 19:23:24.254572 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 19:23:24.254603 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 19:23:24.254633 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:23:24.254663 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 19:23:24.254696 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 13 19:23:24.254728 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 19:23:24.254758 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 19:23:24.254791 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 19:23:24.254824 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 19:23:24.254859 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 19:23:24.254892 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 19:23:24.255066 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 13 19:23:24.255515 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 19:23:24.255993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:23:24.256290 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 19:23:24.256572 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 19:23:24.257000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:23:24.257511 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 19:23:24.257807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:23:24.257842 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 13 19:23:24.257876 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:23:24.257907 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 19:23:24.257940 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Apr 13 19:23:24.257973 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 13 19:23:24.258003 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 19:23:24.258036 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 19:23:24.258071 kernel: fuse: init (API version 7.39) Apr 13 19:23:24.258101 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 13 19:23:24.258133 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 13 19:23:24.258167 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:23:24.258197 kernel: loop: module loaded Apr 13 19:23:24.258282 systemd-journald[1609]: Collecting audit messages is disabled. Apr 13 19:23:24.258334 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 13 19:23:24.258372 systemd-journald[1609]: Journal started Apr 13 19:23:24.258444 systemd-journald[1609]: Runtime Journal (/run/log/journal/ec221e111e2fcbee02668f574f5f6873) is 8.0M, max 75.3M, 67.3M free. Apr 13 19:23:24.270422 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 13 19:23:24.280510 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 19:23:24.280971 systemd[1]: Mounted media.mount - External Media Directory. Apr 13 19:23:24.286205 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 13 19:23:24.291985 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 13 19:23:24.300113 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 19:23:24.310003 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 19:23:24.319140 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:23:24.326608 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Apr 13 19:23:24.326975 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 19:23:24.335046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:23:24.335440 kernel: ACPI: bus type drm_connector registered Apr 13 19:23:24.335469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:23:24.342075 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:23:24.342472 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:23:24.350174 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:23:24.350580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:23:24.357385 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 19:23:24.357764 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 19:23:24.366722 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:23:24.367177 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:23:24.373432 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:23:24.380133 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 13 19:23:24.390181 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 19:23:24.418643 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 13 19:23:24.432605 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 19:23:24.446589 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 13 19:23:24.452203 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 19:23:24.463872 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Apr 13 19:23:24.479847 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 13 19:23:24.486252 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:23:24.504677 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 19:23:24.509661 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:23:24.528557 systemd-journald[1609]: Time spent on flushing to /var/log/journal/ec221e111e2fcbee02668f574f5f6873 is 98.003ms for 884 entries. Apr 13 19:23:24.528557 systemd-journald[1609]: System Journal (/var/log/journal/ec221e111e2fcbee02668f574f5f6873) is 8.0M, max 195.6M, 187.6M free. Apr 13 19:23:24.654068 systemd-journald[1609]: Received client request to flush runtime journal. Apr 13 19:23:24.518722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:23:24.531679 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 19:23:24.548168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:23:24.562786 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 13 19:23:24.574957 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 19:23:24.579185 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 13 19:23:24.587618 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 19:23:24.600875 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 19:23:24.670440 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Apr 13 19:23:24.676610 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:23:24.681487 udevadm[1665]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 13 19:23:24.688942 systemd-tmpfiles[1658]: ACLs are not supported, ignoring. Apr 13 19:23:24.688984 systemd-tmpfiles[1658]: ACLs are not supported, ignoring. Apr 13 19:23:24.698205 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:23:24.717843 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 19:23:24.786285 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 13 19:23:24.803931 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 19:23:24.847700 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Apr 13 19:23:24.847740 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Apr 13 19:23:24.858082 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:23:25.448295 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 19:23:25.459678 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:23:25.519997 systemd-udevd[1686]: Using default interface naming scheme 'v255'. Apr 13 19:23:25.562234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:23:25.590727 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:23:25.624859 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 19:23:25.705761 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 13 19:23:25.809644 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 13 19:23:25.836224 (udev-worker)[1703]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:23:25.987542 systemd-networkd[1694]: lo: Link UP Apr 13 19:23:25.987566 systemd-networkd[1694]: lo: Gained carrier Apr 13 19:23:25.991586 systemd-networkd[1694]: Enumeration completed Apr 13 19:23:25.991782 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:23:26.000252 systemd-networkd[1694]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:23:26.000278 systemd-networkd[1694]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:23:26.002971 systemd-networkd[1694]: eth0: Link UP Apr 13 19:23:26.003269 systemd-networkd[1694]: eth0: Gained carrier Apr 13 19:23:26.003321 systemd-networkd[1694]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:23:26.021437 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1707) Apr 13 19:23:26.028958 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 13 19:23:26.055977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:23:26.064520 systemd-networkd[1694]: eth0: DHCPv4 address 172.31.26.195/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 13 19:23:26.253573 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 13 19:23:26.258953 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 19:23:26.275991 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 19:23:26.280103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:23:26.304457 lvm[1813]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:23:26.347173 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 19:23:26.350808 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:23:26.359688 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 19:23:26.383243 lvm[1818]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:23:26.423046 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 13 19:23:26.428665 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 19:23:26.432003 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 13 19:23:26.432056 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:23:26.434850 systemd[1]: Reached target machines.target - Containers. Apr 13 19:23:26.439457 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 13 19:23:26.448713 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 13 19:23:26.457699 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 13 19:23:26.464285 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:23:26.470637 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 13 19:23:26.485680 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 13 19:23:26.506508 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 19:23:26.514641 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 13 19:23:26.549651 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 13 19:23:26.562732 kernel: loop0: detected capacity change from 0 to 114328 Apr 13 19:23:26.569156 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 13 19:23:26.570877 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 13 19:23:26.607434 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 19:23:26.652455 kernel: loop1: detected capacity change from 0 to 114432 Apr 13 19:23:26.695469 kernel: loop2: detected capacity change from 0 to 209336 Apr 13 19:23:26.757481 kernel: loop3: detected capacity change from 0 to 52536 Apr 13 19:23:26.809444 kernel: loop4: detected capacity change from 0 to 114328 Apr 13 19:23:26.844460 kernel: loop5: detected capacity change from 0 to 114432 Apr 13 19:23:26.871457 kernel: loop6: detected capacity change from 0 to 209336 Apr 13 19:23:26.911115 kernel: loop7: detected capacity change from 0 to 52536 Apr 13 19:23:26.933200 (sd-merge)[1839]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 13 19:23:26.934321 (sd-merge)[1839]: Merged extensions into '/usr'. Apr 13 19:23:26.966624 systemd[1]: Reloading requested from client PID 1826 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 19:23:26.966655 systemd[1]: Reloading... Apr 13 19:23:27.126768 zram_generator::config[1867]: No configuration found. Apr 13 19:23:27.186225 systemd-networkd[1694]: eth0: Gained IPv6LL Apr 13 19:23:27.227316 ldconfig[1822]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Apr 13 19:23:27.388804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:23:27.540113 systemd[1]: Reloading finished in 572 ms. Apr 13 19:23:27.567965 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 19:23:27.571919 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 19:23:27.575126 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 19:23:27.592648 systemd[1]: Starting ensure-sysext.service... Apr 13 19:23:27.601686 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 19:23:27.623545 systemd[1]: Reloading requested from client PID 1928 ('systemctl') (unit ensure-sysext.service)... Apr 13 19:23:27.623577 systemd[1]: Reloading... Apr 13 19:23:27.664147 systemd-tmpfiles[1929]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 19:23:27.665569 systemd-tmpfiles[1929]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 19:23:27.669516 systemd-tmpfiles[1929]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 19:23:27.670289 systemd-tmpfiles[1929]: ACLs are not supported, ignoring. Apr 13 19:23:27.670626 systemd-tmpfiles[1929]: ACLs are not supported, ignoring. Apr 13 19:23:27.677428 systemd-tmpfiles[1929]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:23:27.677616 systemd-tmpfiles[1929]: Skipping /boot Apr 13 19:23:27.702126 systemd-tmpfiles[1929]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:23:27.702340 systemd-tmpfiles[1929]: Skipping /boot Apr 13 19:23:27.774601 zram_generator::config[1956]: No configuration found. 
Apr 13 19:23:28.015757 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:23:28.168467 systemd[1]: Reloading finished in 544 ms. Apr 13 19:23:28.194324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:23:28.217758 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:23:28.229685 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 13 19:23:28.247624 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 13 19:23:28.263767 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 19:23:28.280018 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 13 19:23:28.304209 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:23:28.315717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:23:28.334655 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:23:28.346193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:23:28.353283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:23:28.369119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:23:28.369513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:23:28.395049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:23:28.396172 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Apr 13 19:23:28.408993 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:23:28.411959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:23:28.428252 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 19:23:28.451956 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:23:28.461614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:23:28.483592 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:23:28.489327 augenrules[2052]: No rules
Apr 13 19:23:28.501173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:23:28.517824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:23:28.524755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:23:28.525180 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 19:23:28.542056 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:23:28.548726 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 19:23:28.555026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:23:28.555368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:23:28.562567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:23:28.562942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:23:28.583709 systemd[1]: Finished ensure-sysext.service.
Apr 13 19:23:28.587431 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:23:28.587807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:23:28.595136 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:23:28.597906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:23:28.624886 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 19:23:28.626130 systemd-resolved[2026]: Positive Trust Anchors:
Apr 13 19:23:28.626157 systemd-resolved[2026]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:23:28.626221 systemd-resolved[2026]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:23:28.631191 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:23:28.631388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:23:28.641945 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 19:23:28.645506 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 19:23:28.649298 systemd-resolved[2026]: Defaulting to hostname 'linux'.
Apr 13 19:23:28.653159 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:23:28.658898 systemd[1]: Reached target network.target - Network.
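The positive trust anchor that systemd-resolved prints above is the DNS root zone's DS record (the 2017 root KSK). Its fields decompose mechanically — a small sketch, with the record string copied verbatim from the log:

```python
# Root trust anchor as logged by systemd-resolved:
#   "<owner> IN DS <key-tag> <algorithm> <digest-type> <digest>"
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split()
key_tag, alg, digest_type = int(key_tag), int(alg), int(digest_type)

assert owner == "."       # anchor for the DNS root zone
assert alg == 8           # RSASHA256
assert digest_type == 2   # SHA-256 digest of the matching DNSKEY
assert len(digest) == 64  # 32-byte SHA-256 digest, hex-encoded
print(key_tag)            # → 20326
```

Answers below any of the negative trust anchors (private ranges, `local`, `test`, etc.) are exempted from DNSSEC validation.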
Apr 13 19:23:28.665577 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 19:23:28.671046 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:23:28.676697 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 19:23:28.680628 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:23:28.683820 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 19:23:28.687181 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 19:23:28.690801 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 19:23:28.694053 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 19:23:28.697580 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 19:23:28.701031 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 19:23:28.701125 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:23:28.703511 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:23:28.707522 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 19:23:28.712873 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 19:23:28.719255 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 19:23:28.726270 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 19:23:28.729093 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:23:28.731660 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:23:28.734449 systemd[1]: System is tainted: cgroupsv1
Apr 13 19:23:28.734516 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 19:23:28.734566 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 19:23:28.740633 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 19:23:28.750003 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 19:23:28.757384 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 19:23:28.770637 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 19:23:28.787784 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 19:23:28.791722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 19:23:28.795930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:23:28.805470 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 19:23:28.828065 jq[2084]: false
Apr 13 19:23:28.843190 systemd[1]: Started ntpd.service - Network Time Service.
Apr 13 19:23:28.852172 extend-filesystems[2085]: Found loop4
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found loop5
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found loop6
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found loop7
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found nvme0n1
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found nvme0n1p1
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found nvme0n1p2
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found nvme0n1p3
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found usr
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found nvme0n1p4
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found nvme0n1p6
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found nvme0n1p7
Apr 13 19:23:28.856333 extend-filesystems[2085]: Found nvme0n1p9
Apr 13 19:23:28.856333 extend-filesystems[2085]: Checking size of /dev/nvme0n1p9
Apr 13 19:23:28.888383 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 19:23:28.906322 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 19:23:28.924591 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 13 19:23:28.933876 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 19:23:28.953196 dbus-daemon[2082]: [system] SELinux support is enabled
Apr 13 19:23:28.957701 dbus-daemon[2082]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1694 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 13 19:23:28.965778 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 19:23:28.990868 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 19:23:28.995245 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 19:23:29.012540 extend-filesystems[2085]: Resized partition /dev/nvme0n1p9
Apr 13 19:23:29.018717 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 19:23:29.028296 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 19:23:29.044148 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 19:23:29.083003 extend-filesystems[2113]: resize2fs 1.47.1 (20-May-2024)
Apr 13 19:23:29.088896 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 19:23:29.091274 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 19:23:29.127624 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 13 19:23:29.124627 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 19:23:29.125154 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 19:23:29.153875 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 19:23:29.173827 coreos-metadata[2081]: Apr 13 19:23:29.165 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 13 19:23:29.168466 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 19:23:29.167718 ntpd[2088]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 17:37:19 UTC 2026 (1): Starting
Apr 13 19:23:29.196754 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 17:37:19 UTC 2026 (1): Starting
Apr 13 19:23:29.196754 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 19:23:29.196754 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: ----------------------------------------------------
Apr 13 19:23:29.196754 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: ntp-4 is maintained by Network Time Foundation,
Apr 13 19:23:29.196754 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 19:23:29.196754 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: corporation. Support and training for ntp-4 are
Apr 13 19:23:29.196754 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: available at https://www.nwtime.org/support
Apr 13 19:23:29.196754 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: ----------------------------------------------------
Apr 13 19:23:29.196754 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: proto: precision = 0.096 usec (-23)
Apr 13 19:23:29.196754 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: basedate set to 2026-04-01
Apr 13 19:23:29.196754 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: gps base set to 2026-04-05 (week 2413)
Apr 13 19:23:29.208655 jq[2115]: true
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.174 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.176 INFO Fetch successful
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.176 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.179 INFO Fetch successful
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.179 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.181 INFO Fetch successful
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.181 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.183 INFO Fetch successful
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.183 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.187 INFO Fetch failed with 404: resource not found
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.187 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.188 INFO Fetch successful
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.188 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.194 INFO Fetch successful
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.194 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.200 INFO Fetch successful
Apr 13 19:23:29.208870 coreos-metadata[2081]: Apr 13 19:23:29.200 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 13 19:23:29.167766 ntpd[2088]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 19:23:29.313488 coreos-metadata[2081]: Apr 13 19:23:29.212 INFO Fetch successful
Apr 13 19:23:29.313488 coreos-metadata[2081]: Apr 13 19:23:29.213 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 13 19:23:29.313488 coreos-metadata[2081]: Apr 13 19:23:29.213 INFO Fetch successful
Apr 13 19:23:29.313690 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 19:23:29.313690 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 19:23:29.313690 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 19:23:29.313690 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: Listen normally on 3 eth0 172.31.26.195:123
Apr 13 19:23:29.313690 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: Listen normally on 4 lo [::1]:123
Apr 13 19:23:29.313690 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: Listen normally on 5 eth0 [fe80::4ef:7eff:fe17:fd69%2]:123
Apr 13 19:23:29.313690 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: Listening on routing socket on fd #22 for interface updates
Apr 13 19:23:29.313690 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 19:23:29.313690 ntpd[2088]: 13 Apr 19:23:29 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 19:23:29.314052 tar[2125]: linux-arm64/LICENSE
Apr 13 19:23:29.314052 tar[2125]: linux-arm64/helm
Apr 13 19:23:29.273874 (ntainerd)[2133]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 19:23:29.167786 ntpd[2088]: ----------------------------------------------------
Apr 13 19:23:29.275069 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 19:23:29.167805 ntpd[2088]: ntp-4 is maintained by Network Time Foundation,
Apr 13 19:23:29.296276 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 19:23:29.167824 ntpd[2088]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 19:23:29.296324 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 19:23:29.167842 ntpd[2088]: corporation. Support and training for ntp-4 are
Apr 13 19:23:29.311962 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 13 19:23:29.167864 ntpd[2088]: available at https://www.nwtime.org/support
Apr 13 19:23:29.316737 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 19:23:29.167882 ntpd[2088]: ----------------------------------------------------
Apr 13 19:23:29.316777 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 19:23:29.180830 ntpd[2088]: proto: precision = 0.096 usec (-23)
Apr 13 19:23:29.184759 ntpd[2088]: basedate set to 2026-04-01
Apr 13 19:23:29.184791 ntpd[2088]: gps base set to 2026-04-05 (week 2413)
Apr 13 19:23:29.213485 ntpd[2088]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 19:23:29.213564 ntpd[2088]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 19:23:29.213826 ntpd[2088]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 19:23:29.213892 ntpd[2088]: Listen normally on 3 eth0 172.31.26.195:123
Apr 13 19:23:29.213959 ntpd[2088]: Listen normally on 4 lo [::1]:123
Apr 13 19:23:29.214030 ntpd[2088]: Listen normally on 5 eth0 [fe80::4ef:7eff:fe17:fd69%2]:123
Apr 13 19:23:29.214091 ntpd[2088]: Listening on routing socket on fd #22 for interface updates
Apr 13 19:23:29.288304 dbus-daemon[2082]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 13 19:23:29.309655 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 19:23:29.309707 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 19:23:29.390039 jq[2137]: true
Apr 13 19:23:29.407430 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 13 19:23:29.444187 update_engine[2114]: I20260413 19:23:29.442301 2114 main.cc:92] Flatcar Update Engine starting
Apr 13 19:23:29.468089 update_engine[2114]: I20260413 19:23:29.464458 2114 update_check_scheduler.cc:74] Next update check in 2m4s
Apr 13 19:23:29.488540 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 13 19:23:29.495423 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 13 19:23:29.527137 extend-filesystems[2113]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 13 19:23:29.527137 extend-filesystems[2113]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 13 19:23:29.527137 extend-filesystems[2113]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
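The coreos-metadata lines above show the IMDSv2 flow: a PUT to `/latest/api/token` to obtain a session token, then GETs against the versioned metadata paths seen in the log. A sketch of how those requests are shaped, using only the stdlib; the 6-hour TTL value and the helper name are illustrative assumptions, and no network call is made here:

```python
from urllib.request import Request

IMDS = "http://169.254.169.254"

# Step 1 (per the "Putting .../latest/api/token" line): request a session token.
token_req = Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},  # assumed 6h TTL
)

def metadata_req(path: str, token: str) -> Request:
    """Step 2: build a GET for one metadata path, e.g. meta-data/instance-id."""
    return Request(f"{IMDS}/2021-01-03/{path}",
                   headers={"X-aws-ec2-metadata-token": token})

# Shape check only; actually sending these requires running on an EC2 instance.
req = metadata_req("meta-data/instance-id", "dummy-token")
assert token_req.get_method() == "PUT" and req.get_method() == "GET"
```

The 404 for `meta-data/ipv6` in the log is expected on an instance with no IPv6 address; the agent logs it and moves on.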
Apr 13 19:23:29.547783 extend-filesystems[2085]: Resized filesystem in /dev/nvme0n1p9
Apr 13 19:23:29.559823 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 19:23:29.560334 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 19:23:29.566799 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 19:23:29.576242 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 19:23:29.584527 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 19:23:29.586852 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 19:23:29.591708 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 19:23:29.620655 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (2182)
Apr 13 19:23:29.796261 bash[2218]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 19:23:29.789220 systemd-logind[2104]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 13 19:23:29.789257 systemd-logind[2104]: Watching system buttons on /dev/input/event1 (Sleep Button)
Apr 13 19:23:29.795217 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 19:23:29.805124 systemd-logind[2104]: New seat seat0.
Apr 13 19:23:29.896943 systemd[1]: Starting sshkeys.service...
Apr 13 19:23:29.957219 amazon-ssm-agent[2181]: Initializing new seelog logger
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: New Seelog Logger Creation Complete
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: 2026/04/13 19:23:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: 2026/04/13 19:23:29 processing appconfig overrides
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: 2026/04/13 19:23:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: 2026/04/13 19:23:29 processing appconfig overrides
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: 2026/04/13 19:23:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: 2026/04/13 19:23:29 processing appconfig overrides
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: 2026-04-13 19:23:29 INFO Proxy environment variables:
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: 2026/04/13 19:23:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:30.015583 amazon-ssm-agent[2181]: 2026/04/13 19:23:29 processing appconfig overrides
Apr 13 19:23:30.005348 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 19:23:30.077521 amazon-ssm-agent[2181]: 2026-04-13 19:23:29 INFO https_proxy:
Apr 13 19:23:30.097534 containerd[2133]: time="2026-04-13T19:23:30.095242569Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 19:23:30.155738 dbus-daemon[2082]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 13 19:23:30.159847 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
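The EXT4 resize recorded earlier grows `/dev/nvme0n1p9` from 553472 to 3587067 blocks at a 4 KiB block size. A quick sanity check of what those figures mean in bytes (all numbers taken from the log; only the arithmetic is added):

```python
# "(4k) blocks" per the extend-filesystems/resize2fs output above
BLOCK = 4096

old_bytes = 553472 * BLOCK   # filesystem size before the online resize
new_bytes = 3587067 * BLOCK  # size after resize2fs finished

assert old_bytes == 2_267_021_312    # ≈ 2.1 GiB
assert new_bytes == 14_692_626_432   # ≈ 13.7 GiB
print(f"{old_bytes / 2**30:.1f} GiB -> {new_bytes / 2**30:.1f} GiB")
```

The jump from one to two block-group descriptor blocks (`old_desc_blocks = 1, new_desc_blocks = 2`) is a side effect of the roughly 6.5x growth.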
Apr 13 19:23:30.169292 dbus-daemon[2082]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2157 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 13 19:23:30.181020 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 19:23:30.189722 amazon-ssm-agent[2181]: 2026-04-13 19:23:29 INFO http_proxy:
Apr 13 19:23:30.266389 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 19:23:30.275996 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 13 19:23:30.303679 amazon-ssm-agent[2181]: 2026-04-13 19:23:29 INFO no_proxy:
Apr 13 19:23:30.373335 polkitd[2292]: Started polkitd version 121
Apr 13 19:23:30.396925 amazon-ssm-agent[2181]: 2026-04-13 19:23:29 INFO Checking if agent identity type OnPrem can be assumed
Apr 13 19:23:30.418512 containerd[2133]: time="2026-04-13T19:23:30.417988679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:23:30.431717 containerd[2133]: time="2026-04-13T19:23:30.431624303Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:23:30.431717 containerd[2133]: time="2026-04-13T19:23:30.431702471Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 19:23:30.431868 containerd[2133]: time="2026-04-13T19:23:30.431763635Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 19:23:30.434416 containerd[2133]: time="2026-04-13T19:23:30.432090611Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 19:23:30.434416 containerd[2133]: time="2026-04-13T19:23:30.432137123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 19:23:30.434416 containerd[2133]: time="2026-04-13T19:23:30.432260483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:23:30.434416 containerd[2133]: time="2026-04-13T19:23:30.432290411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:23:30.434416 containerd[2133]: time="2026-04-13T19:23:30.432657443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:23:30.434416 containerd[2133]: time="2026-04-13T19:23:30.432689279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 19:23:30.434416 containerd[2133]: time="2026-04-13T19:23:30.432719003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:23:30.434416 containerd[2133]: time="2026-04-13T19:23:30.432746951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 19:23:30.434416 containerd[2133]: time="2026-04-13T19:23:30.432900527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:23:30.434416 containerd[2133]: time="2026-04-13T19:23:30.433298759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:23:30.436129 containerd[2133]: time="2026-04-13T19:23:30.435662267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:23:30.436129 containerd[2133]: time="2026-04-13T19:23:30.435719267Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 19:23:30.436129 containerd[2133]: time="2026-04-13T19:23:30.435923867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 19:23:30.436129 containerd[2133]: time="2026-04-13T19:23:30.436021835Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 19:23:30.442922 polkitd[2292]: Loading rules from directory /etc/polkit-1/rules.d
Apr 13 19:23:30.446528 polkitd[2292]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 13 19:23:30.447928 containerd[2133]: time="2026-04-13T19:23:30.447804851Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 19:23:30.448051 containerd[2133]: time="2026-04-13T19:23:30.447983711Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 19:23:30.448103 containerd[2133]: time="2026-04-13T19:23:30.448044551Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 19:23:30.448151 containerd[2133]: time="2026-04-13T19:23:30.448112315Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 19:23:30.454643 containerd[2133]: time="2026-04-13T19:23:30.448150199Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 19:23:30.454643 containerd[2133]: time="2026-04-13T19:23:30.450811955Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 19:23:30.453375 polkitd[2292]: Finished loading, compiling and executing 2 rules
Apr 13 19:23:30.458829 containerd[2133]: time="2026-04-13T19:23:30.458745347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 19:23:30.459280 containerd[2133]: time="2026-04-13T19:23:30.459205271Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 19:23:30.459357 containerd[2133]: time="2026-04-13T19:23:30.459283847Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 19:23:30.459357 containerd[2133]: time="2026-04-13T19:23:30.459342083Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 19:23:30.460852 dbus-daemon[2082]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 13 19:23:30.462231 containerd[2133]: time="2026-04-13T19:23:30.459378635Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 19:23:30.462231 containerd[2133]: time="2026-04-13T19:23:30.461465723Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 19:23:30.462231 containerd[2133]: time="2026-04-13T19:23:30.461537087Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 19:23:30.462231 containerd[2133]: time="2026-04-13T19:23:30.461607467Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 19:23:30.462231 containerd[2133]: time="2026-04-13T19:23:30.461645063Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 19:23:30.462231 containerd[2133]: time="2026-04-13T19:23:30.461701583Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 19:23:30.462231 containerd[2133]: time="2026-04-13T19:23:30.461735903Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 19:23:30.462231 containerd[2133]: time="2026-04-13T19:23:30.461789039Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 19:23:30.463877 containerd[2133]: time="2026-04-13T19:23:30.461833595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.464514 systemd[1]: Started polkit.service - Authorization Manager.
Apr 13 19:23:30.464703 polkitd[2292]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.476506187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.476606003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.476669123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.476728211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.476765855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.476822531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.476860055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.476920451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.476960735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.477037559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.477099083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.477133499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.477197267Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.477273767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478067 containerd[2133]: time="2026-04-13T19:23:30.477307331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.478808 containerd[2133]: time="2026-04-13T19:23:30.477360575Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 19:23:30.482141 containerd[2133]: time="2026-04-13T19:23:30.480562031Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 19:23:30.482141 containerd[2133]: time="2026-04-13T19:23:30.480663431Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 19:23:30.482141 containerd[2133]: time="2026-04-13T19:23:30.480731075Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 19:23:30.482510 containerd[2133]: time="2026-04-13T19:23:30.480766415Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 19:23:30.482579 containerd[2133]: time="2026-04-13T19:23:30.482528807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.482628 containerd[2133]: time="2026-04-13T19:23:30.482579555Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 19:23:30.482692 containerd[2133]: time="2026-04-13T19:23:30.482636231Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 19:23:30.482742 containerd[2133]: time="2026-04-13T19:23:30.482670947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 19:23:30.485644 containerd[2133]: time="2026-04-13T19:23:30.484650119Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 19:23:30.488013 containerd[2133]: time="2026-04-13T19:23:30.486461843Z" level=info msg="Connect containerd service"
Apr 13 19:23:30.488013 containerd[2133]: time="2026-04-13T19:23:30.486578195Z" level=info msg="using legacy CRI server"
Apr 13 19:23:30.488013 containerd[2133]: time="2026-04-13T19:23:30.486600887Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 19:23:30.488013 containerd[2133]: time="2026-04-13T19:23:30.486804467Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 19:23:30.491583 containerd[2133]: time="2026-04-13T19:23:30.491512667Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 19:23:30.493246 containerd[2133]: time="2026-04-13T19:23:30.491859095Z" level=info msg="Start subscribing containerd event"
Apr 13 19:23:30.493246 containerd[2133]: time="2026-04-13T19:23:30.491951663Z" level=info msg="Start recovering state"
Apr 13 19:23:30.493246 containerd[2133]: time="2026-04-13T19:23:30.492180035Z" level=info msg="Start event monitor"
Apr 13 19:23:30.493246 containerd[2133]: time="2026-04-13T19:23:30.492208187Z"
level=info msg="Start snapshots syncer" Apr 13 19:23:30.493246 containerd[2133]: time="2026-04-13T19:23:30.492230531Z" level=info msg="Start cni network conf syncer for default" Apr 13 19:23:30.493246 containerd[2133]: time="2026-04-13T19:23:30.492249407Z" level=info msg="Start streaming server" Apr 13 19:23:30.498624 containerd[2133]: time="2026-04-13T19:23:30.494723099Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 19:23:30.498624 containerd[2133]: time="2026-04-13T19:23:30.494846015Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 19:23:30.498811 amazon-ssm-agent[2181]: 2026-04-13 19:23:29 INFO Checking if agent identity type EC2 can be assumed Apr 13 19:23:30.495100 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 19:23:30.525717 containerd[2133]: time="2026-04-13T19:23:30.524912339Z" level=info msg="containerd successfully booted in 0.442162s" Apr 13 19:23:30.540699 systemd-hostnamed[2157]: Hostname set to (transient) Apr 13 19:23:30.540724 systemd-resolved[2026]: System hostname changed to 'ip-172-31-26-195'. 
Apr 13 19:23:30.598089 amazon-ssm-agent[2181]: 2026-04-13 19:23:30 INFO Agent will take identity from EC2 Apr 13 19:23:30.655638 coreos-metadata[2271]: Apr 13 19:23:30.655 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 13 19:23:30.658430 coreos-metadata[2271]: Apr 13 19:23:30.656 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 13 19:23:30.658430 coreos-metadata[2271]: Apr 13 19:23:30.657 INFO Fetch successful Apr 13 19:23:30.658430 coreos-metadata[2271]: Apr 13 19:23:30.657 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 13 19:23:30.663517 coreos-metadata[2271]: Apr 13 19:23:30.663 INFO Fetch successful Apr 13 19:23:30.668556 unknown[2271]: wrote ssh authorized keys file for user: core Apr 13 19:23:30.694150 amazon-ssm-agent[2181]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:23:30.722694 update-ssh-keys[2322]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:23:30.731834 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 19:23:30.752113 systemd[1]: Finished sshkeys.service. Apr 13 19:23:30.793521 amazon-ssm-agent[2181]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:23:30.888759 locksmithd[2196]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 19:23:30.892808 amazon-ssm-agent[2181]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:23:30.992094 amazon-ssm-agent[2181]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 13 19:23:31.092311 amazon-ssm-agent[2181]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 13 19:23:31.113050 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Apr 13 19:23:31.196478 amazon-ssm-agent[2181]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] Starting Core Agent Apr 13 19:23:31.296994 amazon-ssm-agent[2181]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 13 19:23:31.397215 amazon-ssm-agent[2181]: 2026-04-13 19:23:30 INFO [Registrar] Starting registrar module Apr 13 19:23:31.497679 amazon-ssm-agent[2181]: 2026-04-13 19:23:30 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 13 19:23:31.650174 amazon-ssm-agent[2181]: 2026-04-13 19:23:31 INFO [EC2Identity] EC2 registration was successful. Apr 13 19:23:31.687167 tar[2125]: linux-arm64/README.md Apr 13 19:23:31.688771 amazon-ssm-agent[2181]: 2026-04-13 19:23:31 INFO [CredentialRefresher] credentialRefresher has started Apr 13 19:23:31.688771 amazon-ssm-agent[2181]: 2026-04-13 19:23:31 INFO [CredentialRefresher] Starting credentials refresher loop Apr 13 19:23:31.688771 amazon-ssm-agent[2181]: 2026-04-13 19:23:31 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 13 19:23:31.727438 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 19:23:31.751357 amazon-ssm-agent[2181]: 2026-04-13 19:23:31 INFO [CredentialRefresher] Next credential rotation will be in 30.841659108066665 minutes Apr 13 19:23:31.826772 sshd_keygen[2142]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 19:23:31.875883 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 19:23:31.893888 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 19:23:31.907898 systemd[1]: Started sshd@0-172.31.26.195:22-4.175.71.9:53070.service - OpenSSH per-connection server daemon (4.175.71.9:53070). Apr 13 19:23:31.927358 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 19:23:31.927897 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Apr 13 19:23:31.944835 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 19:23:31.980242 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 19:23:31.994990 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 19:23:32.013937 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 19:23:32.019167 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 19:23:32.072843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:32.079542 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 19:23:32.083570 systemd[1]: Startup finished in 9.070s (kernel) + 9.426s (userspace) = 18.497s. Apr 13 19:23:32.091141 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:23:32.716922 amazon-ssm-agent[2181]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 13 19:23:32.819344 amazon-ssm-agent[2181]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2385) started Apr 13 19:23:32.918865 amazon-ssm-agent[2181]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 13 19:23:32.953990 sshd[2355]: Accepted publickey for core from 4.175.71.9 port 53070 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:32.956682 sshd[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:32.976432 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 19:23:32.986850 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Apr 13 19:23:32.996646 systemd-logind[2104]: New session 1 of user core. Apr 13 19:23:33.033884 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 19:23:33.036780 kubelet[2374]: E0413 19:23:33.036065 2374 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:23:33.047985 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 19:23:33.049873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:23:33.050214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:23:33.074025 (systemd)[2398]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 19:23:33.308770 systemd[2398]: Queued start job for default target default.target. Apr 13 19:23:33.309995 systemd[2398]: Created slice app.slice - User Application Slice. Apr 13 19:23:33.310040 systemd[2398]: Reached target paths.target - Paths. Apr 13 19:23:33.310072 systemd[2398]: Reached target timers.target - Timers. Apr 13 19:23:33.327551 systemd[2398]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 19:23:33.341499 systemd[2398]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 19:23:33.341806 systemd[2398]: Reached target sockets.target - Sockets. Apr 13 19:23:33.341942 systemd[2398]: Reached target basic.target - Basic System. Apr 13 19:23:33.342043 systemd[2398]: Reached target default.target - Main User Target. Apr 13 19:23:33.342104 systemd[2398]: Startup finished in 256ms. Apr 13 19:23:33.342745 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 19:23:33.349964 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 13 19:23:34.043888 systemd[1]: Started sshd@1-172.31.26.195:22-4.175.71.9:53086.service - OpenSSH per-connection server daemon (4.175.71.9:53086). Apr 13 19:23:35.024555 sshd[2412]: Accepted publickey for core from 4.175.71.9 port 53086 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:35.027145 sshd[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:35.035494 systemd-logind[2104]: New session 2 of user core. Apr 13 19:23:35.046870 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 19:23:35.696720 sshd[2412]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:35.703772 systemd[1]: sshd@1-172.31.26.195:22-4.175.71.9:53086.service: Deactivated successfully. Apr 13 19:23:35.710056 systemd-logind[2104]: Session 2 logged out. Waiting for processes to exit. Apr 13 19:23:35.711239 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 19:23:35.714614 systemd-logind[2104]: Removed session 2. Apr 13 19:23:35.878884 systemd[1]: Started sshd@2-172.31.26.195:22-4.175.71.9:47104.service - OpenSSH per-connection server daemon (4.175.71.9:47104). Apr 13 19:23:36.480226 systemd-resolved[2026]: Clock change detected. Flushing caches. Apr 13 19:23:37.225746 sshd[2420]: Accepted publickey for core from 4.175.71.9 port 47104 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:37.227401 sshd[2420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:37.236616 systemd-logind[2104]: New session 3 of user core. Apr 13 19:23:37.243222 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 19:23:37.932019 sshd[2420]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:37.938824 systemd[1]: sshd@2-172.31.26.195:22-4.175.71.9:47104.service: Deactivated successfully. Apr 13 19:23:37.944036 systemd[1]: session-3.scope: Deactivated successfully. 
Apr 13 19:23:37.945548 systemd-logind[2104]: Session 3 logged out. Waiting for processes to exit. Apr 13 19:23:37.948002 systemd-logind[2104]: Removed session 3. Apr 13 19:23:38.089176 systemd[1]: Started sshd@3-172.31.26.195:22-4.175.71.9:47106.service - OpenSSH per-connection server daemon (4.175.71.9:47106). Apr 13 19:23:39.070956 sshd[2429]: Accepted publickey for core from 4.175.71.9 port 47106 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:39.074089 sshd[2429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:39.082928 systemd-logind[2104]: New session 4 of user core. Apr 13 19:23:39.086308 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 19:23:39.745978 sshd[2429]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:39.753587 systemd[1]: sshd@3-172.31.26.195:22-4.175.71.9:47106.service: Deactivated successfully. Apr 13 19:23:39.759142 systemd-logind[2104]: Session 4 logged out. Waiting for processes to exit. Apr 13 19:23:39.760348 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 19:23:39.761833 systemd-logind[2104]: Removed session 4. Apr 13 19:23:39.915196 systemd[1]: Started sshd@4-172.31.26.195:22-4.175.71.9:47120.service - OpenSSH per-connection server daemon (4.175.71.9:47120). Apr 13 19:23:40.910266 sshd[2437]: Accepted publickey for core from 4.175.71.9 port 47120 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:40.912807 sshd[2437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:40.919841 systemd-logind[2104]: New session 5 of user core. Apr 13 19:23:40.929293 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 13 19:23:41.451567 sudo[2441]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 19:23:41.452252 sudo[2441]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:41.469635 sudo[2441]: pam_unix(sudo:session): session closed for user root Apr 13 19:23:41.630102 sshd[2437]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:41.639800 systemd-logind[2104]: Session 5 logged out. Waiting for processes to exit. Apr 13 19:23:41.640080 systemd[1]: sshd@4-172.31.26.195:22-4.175.71.9:47120.service: Deactivated successfully. Apr 13 19:23:41.646235 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 19:23:41.647758 systemd-logind[2104]: Removed session 5. Apr 13 19:23:41.804172 systemd[1]: Started sshd@5-172.31.26.195:22-4.175.71.9:47128.service - OpenSSH per-connection server daemon (4.175.71.9:47128). Apr 13 19:23:42.824731 sshd[2446]: Accepted publickey for core from 4.175.71.9 port 47128 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:42.826914 sshd[2446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:42.834824 systemd-logind[2104]: New session 6 of user core. Apr 13 19:23:42.844161 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 13 19:23:43.366126 sudo[2451]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 19:23:43.366807 sudo[2451]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:43.368101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 19:23:43.380232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 19:23:43.383660 sudo[2451]: pam_unix(sudo:session): session closed for user root Apr 13 19:23:43.400341 sudo[2450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 19:23:43.401090 sudo[2450]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:43.429898 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 19:23:43.436957 auditctl[2458]: No rules Apr 13 19:23:43.438758 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 19:23:43.439323 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 19:23:43.455023 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:23:43.522438 augenrules[2477]: No rules Apr 13 19:23:43.528512 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 19:23:43.534375 sudo[2450]: pam_unix(sudo:session): session closed for user root Apr 13 19:23:43.699818 sshd[2446]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:43.708942 systemd[1]: sshd@5-172.31.26.195:22-4.175.71.9:47128.service: Deactivated successfully. Apr 13 19:23:43.714602 systemd-logind[2104]: Session 6 logged out. Waiting for processes to exit. Apr 13 19:23:43.715222 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 19:23:43.719466 systemd-logind[2104]: Removed session 6. Apr 13 19:23:43.768010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:23:43.783279 (kubelet)[2494]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:23:43.848957 kubelet[2494]: E0413 19:23:43.848874 2494 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:23:43.856994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:23:43.857400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:23:43.876236 systemd[1]: Started sshd@6-172.31.26.195:22-4.175.71.9:47138.service - OpenSSH per-connection server daemon (4.175.71.9:47138). Apr 13 19:23:44.917136 sshd[2502]: Accepted publickey for core from 4.175.71.9 port 47138 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:44.919636 sshd[2502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:44.928645 systemd-logind[2104]: New session 7 of user core. Apr 13 19:23:44.937178 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 19:23:45.467144 sudo[2506]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 19:23:45.467875 sudo[2506]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:45.961131 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 13 19:23:45.963190 (dockerd)[2522]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 19:23:46.374828 dockerd[2522]: time="2026-04-13T19:23:46.373327701Z" level=info msg="Starting up" Apr 13 19:23:46.513082 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3821933771-merged.mount: Deactivated successfully. Apr 13 19:23:46.617730 dockerd[2522]: time="2026-04-13T19:23:46.617155906Z" level=info msg="Loading containers: start." Apr 13 19:23:46.776736 kernel: Initializing XFRM netlink socket Apr 13 19:23:46.811060 (udev-worker)[2544]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:23:46.894949 systemd-networkd[1694]: docker0: Link UP Apr 13 19:23:46.919075 dockerd[2522]: time="2026-04-13T19:23:46.918993059Z" level=info msg="Loading containers: done." Apr 13 19:23:46.952425 dockerd[2522]: time="2026-04-13T19:23:46.952361856Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 19:23:46.952802 dockerd[2522]: time="2026-04-13T19:23:46.952517256Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 19:23:46.952802 dockerd[2522]: time="2026-04-13T19:23:46.952735404Z" level=info msg="Daemon has completed initialization" Apr 13 19:23:47.019472 dockerd[2522]: time="2026-04-13T19:23:47.018110072Z" level=info msg="API listen on /run/docker.sock" Apr 13 19:23:47.018924 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 19:23:47.503835 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2425195810-merged.mount: Deactivated successfully. 
Apr 13 19:23:47.829678 containerd[2133]: time="2026-04-13T19:23:47.829534572Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 13 19:23:48.641630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1098480522.mount: Deactivated successfully. Apr 13 19:23:50.034650 containerd[2133]: time="2026-04-13T19:23:50.034570223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:50.036894 containerd[2133]: time="2026-04-13T19:23:50.036822947Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=27283683" Apr 13 19:23:50.039447 containerd[2133]: time="2026-04-13T19:23:50.039024527Z" level=info msg="ImageCreate event name:\"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:50.047719 containerd[2133]: time="2026-04-13T19:23:50.047642099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:50.050125 containerd[2133]: time="2026-04-13T19:23:50.050056103Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"27280282\" in 2.220462251s" Apr 13 19:23:50.050243 containerd[2133]: time="2026-04-13T19:23:50.050123411Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\"" Apr 13 19:23:50.051126 containerd[2133]: 
time="2026-04-13T19:23:50.051061859Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 13 19:23:51.453714 containerd[2133]: time="2026-04-13T19:23:51.451957634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:51.454411 containerd[2133]: time="2026-04-13T19:23:51.454350962Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=23551902" Apr 13 19:23:51.455608 containerd[2133]: time="2026-04-13T19:23:51.455565074Z" level=info msg="ImageCreate event name:\"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:51.462472 containerd[2133]: time="2026-04-13T19:23:51.462403346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:51.465049 containerd[2133]: time="2026-04-13T19:23:51.465000290Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"25029924\" in 1.413879931s" Apr 13 19:23:51.465227 containerd[2133]: time="2026-04-13T19:23:51.465194978Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\"" Apr 13 19:23:51.466860 containerd[2133]: time="2026-04-13T19:23:51.466812794Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 13 
19:23:53.003735 containerd[2133]: time="2026-04-13T19:23:53.003653258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:53.006443 containerd[2133]: time="2026-04-13T19:23:53.006165650Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=18301233" Apr 13 19:23:53.008263 containerd[2133]: time="2026-04-13T19:23:53.008187314Z" level=info msg="ImageCreate event name:\"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:53.017390 containerd[2133]: time="2026-04-13T19:23:53.017010362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:53.019412 containerd[2133]: time="2026-04-13T19:23:53.019225826Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"19779273\" in 1.552354832s" Apr 13 19:23:53.019412 containerd[2133]: time="2026-04-13T19:23:53.019279406Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\"" Apr 13 19:23:53.020301 containerd[2133]: time="2026-04-13T19:23:53.020192486Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 13 19:23:54.046636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 19:23:54.055204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 19:23:54.420062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:54.431754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705497531.mount: Deactivated successfully. Apr 13 19:23:54.439644 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:23:54.525659 kubelet[2744]: E0413 19:23:54.525519 2744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:23:54.531562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:23:54.533153 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:23:55.074794 containerd[2133]: time="2026-04-13T19:23:55.074713024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:55.082911 containerd[2133]: time="2026-04-13T19:23:55.082547368Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=28148953" Apr 13 19:23:55.091417 containerd[2133]: time="2026-04-13T19:23:55.091350424Z" level=info msg="ImageCreate event name:\"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:55.103942 containerd[2133]: time="2026-04-13T19:23:55.103859560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:55.105963 containerd[2133]: time="2026-04-13T19:23:55.105911596Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"28147972\" in 2.085455386s" Apr 13 19:23:55.106414 containerd[2133]: time="2026-04-13T19:23:55.106117504Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\"" Apr 13 19:23:55.107076 containerd[2133]: time="2026-04-13T19:23:55.107017096Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 13 19:23:55.673080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427408126.mount: Deactivated successfully. Apr 13 19:23:56.921072 containerd[2133]: time="2026-04-13T19:23:56.920990169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:56.924153 containerd[2133]: time="2026-04-13T19:23:56.924069117Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Apr 13 19:23:56.926057 containerd[2133]: time="2026-04-13T19:23:56.925989741Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:56.931714 containerd[2133]: time="2026-04-13T19:23:56.931167945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:56.934331 containerd[2133]: time="2026-04-13T19:23:56.934138953Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with 
image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.827055293s" Apr 13 19:23:56.934331 containerd[2133]: time="2026-04-13T19:23:56.934192305Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Apr 13 19:23:56.935396 containerd[2133]: time="2026-04-13T19:23:56.935142201Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 13 19:23:57.454679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541811675.mount: Deactivated successfully. Apr 13 19:23:57.461797 containerd[2133]: time="2026-04-13T19:23:57.461738240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:57.463380 containerd[2133]: time="2026-04-13T19:23:57.463298036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Apr 13 19:23:57.464552 containerd[2133]: time="2026-04-13T19:23:57.463967432Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:57.468720 containerd[2133]: time="2026-04-13T19:23:57.468185120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:57.470191 containerd[2133]: time="2026-04-13T19:23:57.469997948Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", 
repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 534.801183ms" Apr 13 19:23:57.470191 containerd[2133]: time="2026-04-13T19:23:57.470058032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 13 19:23:57.471646 containerd[2133]: time="2026-04-13T19:23:57.471129032Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 13 19:23:58.207782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1716068718.mount: Deactivated successfully. Apr 13 19:24:00.289028 containerd[2133]: time="2026-04-13T19:24:00.287736694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:00.290725 containerd[2133]: time="2026-04-13T19:24:00.289977382Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780" Apr 13 19:24:00.290725 containerd[2133]: time="2026-04-13T19:24:00.290257714Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:00.296430 containerd[2133]: time="2026-04-13T19:24:00.296347186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:00.299016 containerd[2133]: time="2026-04-13T19:24:00.298950226Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.827770542s" Apr 
13 19:24:00.299133 containerd[2133]: time="2026-04-13T19:24:00.299014666Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Apr 13 19:24:00.877232 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 13 19:24:04.547677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 19:24:04.558164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:04.897918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:04.915620 (kubelet)[2914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:24:04.989979 kubelet[2914]: E0413 19:24:04.989901 2914 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:24:04.994372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:24:04.995559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:24:08.926008 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:08.943404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:09.001847 systemd[1]: Reloading requested from client PID 2930 ('systemctl') (unit session-7.scope)... Apr 13 19:24:09.001877 systemd[1]: Reloading... Apr 13 19:24:09.250765 zram_generator::config[2973]: No configuration found. 
Apr 13 19:24:09.546615 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:24:09.731968 systemd[1]: Reloading finished in 729 ms. Apr 13 19:24:09.812857 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 19:24:09.813091 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 19:24:09.814035 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:09.826349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:10.210179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:10.226457 (kubelet)[3042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:24:10.301966 kubelet[3042]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:10.301966 kubelet[3042]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:24:10.301966 kubelet[3042]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 19:24:10.304776 kubelet[3042]: I0413 19:24:10.302034 3042 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:24:12.481003 kubelet[3042]: I0413 19:24:12.480936 3042 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 19:24:12.481003 kubelet[3042]: I0413 19:24:12.480986 3042 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:24:12.481673 kubelet[3042]: I0413 19:24:12.481353 3042 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:24:12.520562 kubelet[3042]: E0413 19:24:12.520485 3042 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.26.195:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.195:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:24:12.524106 kubelet[3042]: I0413 19:24:12.524020 3042 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:24:12.543532 kubelet[3042]: E0413 19:24:12.542164 3042 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:24:12.543532 kubelet[3042]: I0413 19:24:12.542224 3042 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 19:24:12.549821 kubelet[3042]: I0413 19:24:12.549779 3042 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 19:24:12.550822 kubelet[3042]: I0413 19:24:12.550777 3042 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:24:12.551200 kubelet[3042]: I0413 19:24:12.550950 3042 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-195","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 13 19:24:12.551428 kubelet[3042]: I0413 19:24:12.551407 3042 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
19:24:12.551541 kubelet[3042]: I0413 19:24:12.551522 3042 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 19:24:12.552070 kubelet[3042]: I0413 19:24:12.552045 3042 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:12.558039 kubelet[3042]: I0413 19:24:12.558001 3042 kubelet.go:480] "Attempting to sync node with API server" Apr 13 19:24:12.558235 kubelet[3042]: I0413 19:24:12.558214 3042 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:24:12.558360 kubelet[3042]: I0413 19:24:12.558342 3042 kubelet.go:386] "Adding apiserver pod source" Apr 13 19:24:12.558482 kubelet[3042]: I0413 19:24:12.558464 3042 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:24:12.566497 kubelet[3042]: E0413 19:24:12.566436 3042 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-195&limit=500&resourceVersion=0\": dial tcp 172.31.26.195:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:24:12.566661 kubelet[3042]: I0413 19:24:12.566611 3042 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:24:12.567923 kubelet[3042]: I0413 19:24:12.567877 3042 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:24:12.568202 kubelet[3042]: W0413 19:24:12.568164 3042 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 13 19:24:12.574357 kubelet[3042]: I0413 19:24:12.574312 3042 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 19:24:12.574468 kubelet[3042]: I0413 19:24:12.574386 3042 server.go:1289] "Started kubelet" Apr 13 19:24:12.579774 kubelet[3042]: E0413 19:24:12.578216 3042 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.195:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:24:12.579774 kubelet[3042]: I0413 19:24:12.578345 3042 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:24:12.581186 kubelet[3042]: I0413 19:24:12.581084 3042 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:24:12.581720 kubelet[3042]: I0413 19:24:12.581656 3042 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:24:12.583031 kubelet[3042]: I0413 19:24:12.582987 3042 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:24:12.589161 kubelet[3042]: E0413 19:24:12.586913 3042 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.195:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.195:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-195.18a6010ba555637b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-195,UID:ip-172-31-26-195,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-195,},FirstTimestamp:2026-04-13 19:24:12.574344059 +0000 UTC m=+2.340177517,LastTimestamp:2026-04-13 19:24:12.574344059 +0000 UTC 
m=+2.340177517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-195,}" Apr 13 19:24:12.592812 kubelet[3042]: I0413 19:24:12.592759 3042 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:24:12.594500 kubelet[3042]: I0413 19:24:12.594430 3042 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:24:12.596718 kubelet[3042]: I0413 19:24:12.596087 3042 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 19:24:12.596718 kubelet[3042]: E0413 19:24:12.596448 3042 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-195\" not found" Apr 13 19:24:12.597276 kubelet[3042]: I0413 19:24:12.597253 3042 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 19:24:12.597494 kubelet[3042]: I0413 19:24:12.597477 3042 reconciler.go:26] "Reconciler: start to sync state" Apr 13 19:24:12.599628 kubelet[3042]: E0413 19:24:12.598883 3042 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-195?timeout=10s\": dial tcp 172.31.26.195:6443: connect: connection refused" interval="200ms" Apr 13 19:24:12.599833 kubelet[3042]: E0413 19:24:12.599660 3042 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:24:12.602408 kubelet[3042]: E0413 19:24:12.602344 3042 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.195:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:24:12.603190 kubelet[3042]: I0413 19:24:12.603143 3042 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:24:12.603190 kubelet[3042]: I0413 19:24:12.603179 3042 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:24:12.603409 kubelet[3042]: I0413 19:24:12.603355 3042 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:24:12.667155 kubelet[3042]: I0413 19:24:12.666888 3042 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 19:24:12.672273 kubelet[3042]: I0413 19:24:12.670864 3042 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:24:12.672273 kubelet[3042]: I0413 19:24:12.670914 3042 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:24:12.672273 kubelet[3042]: I0413 19:24:12.670964 3042 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:12.672813 kubelet[3042]: I0413 19:24:12.672773 3042 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 19:24:12.674988 kubelet[3042]: I0413 19:24:12.672934 3042 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 19:24:12.674988 kubelet[3042]: I0413 19:24:12.672979 3042 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 19:24:12.674988 kubelet[3042]: I0413 19:24:12.672997 3042 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 19:24:12.674988 kubelet[3042]: E0413 19:24:12.673074 3042 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:24:12.677378 kubelet[3042]: I0413 19:24:12.677314 3042 policy_none.go:49] "None policy: Start" Apr 13 19:24:12.677378 kubelet[3042]: I0413 19:24:12.677363 3042 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 19:24:12.677605 kubelet[3042]: I0413 19:24:12.677393 3042 state_mem.go:35] "Initializing new in-memory state store" Apr 13 19:24:12.680122 kubelet[3042]: E0413 19:24:12.680041 3042 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.195:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:24:12.695427 kubelet[3042]: E0413 19:24:12.695353 3042 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:24:12.695777 kubelet[3042]: I0413 19:24:12.695726 3042 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:24:12.695884 kubelet[3042]: I0413 19:24:12.695770 3042 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:24:12.699899 kubelet[3042]: I0413 19:24:12.699835 3042 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:24:12.702007 kubelet[3042]: E0413 19:24:12.701952 3042 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 19:24:12.702150 kubelet[3042]: E0413 19:24:12.702026 3042 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-195\" not found" Apr 13 19:24:12.792529 kubelet[3042]: E0413 19:24:12.792387 3042 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-195\" not found" node="ip-172-31-26-195" Apr 13 19:24:12.801808 kubelet[3042]: E0413 19:24:12.801555 3042 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-195?timeout=10s\": dial tcp 172.31.26.195:6443: connect: connection refused" interval="400ms" Apr 13 19:24:12.804812 kubelet[3042]: I0413 19:24:12.804600 3042 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-195" Apr 13 19:24:12.805408 kubelet[3042]: E0413 19:24:12.805143 3042 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.195:6443/api/v1/nodes\": dial tcp 172.31.26.195:6443: connect: connection refused" node="ip-172-31-26-195" Apr 13 19:24:12.809824 kubelet[3042]: E0413 19:24:12.809595 3042 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-195\" not found" node="ip-172-31-26-195" Apr 13 19:24:12.811895 kubelet[3042]: E0413 19:24:12.811805 3042 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-195\" not found" node="ip-172-31-26-195" Apr 13 19:24:12.902470 kubelet[3042]: I0413 19:24:12.902358 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf3cb19941c8508ad79c22ddebd04277-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-195\" (UID: 
\"cf3cb19941c8508ad79c22ddebd04277\") " pod="kube-system/kube-apiserver-ip-172-31-26-195" Apr 13 19:24:12.902470 kubelet[3042]: I0413 19:24:12.902433 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/605401f7cde1a970aa588cff29b27ba8-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-195\" (UID: \"605401f7cde1a970aa588cff29b27ba8\") " pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:12.902470 kubelet[3042]: I0413 19:24:12.902473 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/605401f7cde1a970aa588cff29b27ba8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-195\" (UID: \"605401f7cde1a970aa588cff29b27ba8\") " pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:12.902470 kubelet[3042]: I0413 19:24:12.902533 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59db50c4565dd218f53839f72c209247-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-195\" (UID: \"59db50c4565dd218f53839f72c209247\") " pod="kube-system/kube-scheduler-ip-172-31-26-195" Apr 13 19:24:12.902980 kubelet[3042]: I0413 19:24:12.902569 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf3cb19941c8508ad79c22ddebd04277-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-195\" (UID: \"cf3cb19941c8508ad79c22ddebd04277\") " pod="kube-system/kube-apiserver-ip-172-31-26-195" Apr 13 19:24:12.902980 kubelet[3042]: I0413 19:24:12.902605 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/605401f7cde1a970aa588cff29b27ba8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-195\" (UID: \"605401f7cde1a970aa588cff29b27ba8\") " pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:12.902980 kubelet[3042]: I0413 19:24:12.902643 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605401f7cde1a970aa588cff29b27ba8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-195\" (UID: \"605401f7cde1a970aa588cff29b27ba8\") " pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:12.902980 kubelet[3042]: I0413 19:24:12.902677 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/605401f7cde1a970aa588cff29b27ba8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-195\" (UID: \"605401f7cde1a970aa588cff29b27ba8\") " pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:12.902980 kubelet[3042]: I0413 19:24:12.902741 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf3cb19941c8508ad79c22ddebd04277-ca-certs\") pod \"kube-apiserver-ip-172-31-26-195\" (UID: \"cf3cb19941c8508ad79c22ddebd04277\") " pod="kube-system/kube-apiserver-ip-172-31-26-195" Apr 13 19:24:13.008172 kubelet[3042]: I0413 19:24:13.007667 3042 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-195" Apr 13 19:24:13.008379 kubelet[3042]: E0413 19:24:13.008316 3042 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.195:6443/api/v1/nodes\": dial tcp 172.31.26.195:6443: connect: connection refused" node="ip-172-31-26-195" Apr 13 19:24:13.095061 containerd[2133]: time="2026-04-13T19:24:13.095000769Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-195,Uid:cf3cb19941c8508ad79c22ddebd04277,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:13.112065 containerd[2133]: time="2026-04-13T19:24:13.111601377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-195,Uid:605401f7cde1a970aa588cff29b27ba8,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:13.114225 containerd[2133]: time="2026-04-13T19:24:13.113906757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-195,Uid:59db50c4565dd218f53839f72c209247,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:13.203151 kubelet[3042]: E0413 19:24:13.203072 3042 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-195?timeout=10s\": dial tcp 172.31.26.195:6443: connect: connection refused" interval="800ms" Apr 13 19:24:13.412004 kubelet[3042]: I0413 19:24:13.411327 3042 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-195" Apr 13 19:24:13.412004 kubelet[3042]: E0413 19:24:13.411793 3042 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.195:6443/api/v1/nodes\": dial tcp 172.31.26.195:6443: connect: connection refused" node="ip-172-31-26-195" Apr 13 19:24:13.673890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount902197641.mount: Deactivated successfully. 
Apr 13 19:24:13.688535 containerd[2133]: time="2026-04-13T19:24:13.686816952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:13.694465 containerd[2133]: time="2026-04-13T19:24:13.694375752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 13 19:24:13.696409 containerd[2133]: time="2026-04-13T19:24:13.696325920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:13.699744 containerd[2133]: time="2026-04-13T19:24:13.699355272Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:13.703059 containerd[2133]: time="2026-04-13T19:24:13.702980928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:24:13.703980 containerd[2133]: time="2026-04-13T19:24:13.703913352Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:13.705615 containerd[2133]: time="2026-04-13T19:24:13.705545136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:13.707981 containerd[2133]: time="2026-04-13T19:24:13.707725008Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:24:13.709337 
containerd[2133]: time="2026-04-13T19:24:13.709269420Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 614.152671ms" Apr 13 19:24:13.721772 containerd[2133]: time="2026-04-13T19:24:13.721679208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 607.661295ms" Apr 13 19:24:13.730580 containerd[2133]: time="2026-04-13T19:24:13.730520605Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 618.777784ms" Apr 13 19:24:13.756642 kubelet[3042]: E0413 19:24:13.755596 3042 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.195:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:24:13.823479 kubelet[3042]: E0413 19:24:13.823391 3042 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-195&limit=500&resourceVersion=0\": dial tcp 172.31.26.195:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:24:13.861115 kubelet[3042]: E0413 19:24:13.860998 3042 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.195:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:24:13.863329 kubelet[3042]: E0413 19:24:13.863250 3042 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.195:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:24:13.945459 containerd[2133]: time="2026-04-13T19:24:13.945100454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:13.945459 containerd[2133]: time="2026-04-13T19:24:13.945206558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:13.945459 containerd[2133]: time="2026-04-13T19:24:13.945246278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:13.946377 containerd[2133]: time="2026-04-13T19:24:13.946216598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:13.953288 containerd[2133]: time="2026-04-13T19:24:13.952535162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:13.953288 containerd[2133]: time="2026-04-13T19:24:13.952630238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:13.953288 containerd[2133]: time="2026-04-13T19:24:13.952667966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:13.953288 containerd[2133]: time="2026-04-13T19:24:13.952857422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:13.967613 containerd[2133]: time="2026-04-13T19:24:13.966923810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:13.967613 containerd[2133]: time="2026-04-13T19:24:13.967078934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:13.967613 containerd[2133]: time="2026-04-13T19:24:13.967115090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:13.967613 containerd[2133]: time="2026-04-13T19:24:13.967422566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:14.004036 kubelet[3042]: E0413 19:24:14.003981 3042 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-195?timeout=10s\": dial tcp 172.31.26.195:6443: connect: connection refused" interval="1.6s" Apr 13 19:24:14.110805 containerd[2133]: time="2026-04-13T19:24:14.110545942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-195,Uid:59db50c4565dd218f53839f72c209247,Namespace:kube-system,Attempt:0,} returns sandbox id \"66ff032a0dc0fe740fb8882e18f4c81d114a0662027e19025776e48fc3a160d1\"" Apr 13 19:24:14.131379 containerd[2133]: time="2026-04-13T19:24:14.128943227Z" level=info msg="CreateContainer within sandbox \"66ff032a0dc0fe740fb8882e18f4c81d114a0662027e19025776e48fc3a160d1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:24:14.157617 containerd[2133]: time="2026-04-13T19:24:14.157562339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-195,Uid:605401f7cde1a970aa588cff29b27ba8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4e217f373e7b751203f7150f13bb8ef0e01264c66d70a5f46a53e331661f049\"" Apr 13 19:24:14.165739 containerd[2133]: time="2026-04-13T19:24:14.165653003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-195,Uid:cf3cb19941c8508ad79c22ddebd04277,Namespace:kube-system,Attempt:0,} returns sandbox id \"440ca907bbd6d2e8832d1e4bc70814177619f6d3a1e84d9dd7efa6f609d95355\"" Apr 13 19:24:14.169149 containerd[2133]: time="2026-04-13T19:24:14.169053035Z" level=info msg="CreateContainer within sandbox \"66ff032a0dc0fe740fb8882e18f4c81d114a0662027e19025776e48fc3a160d1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c161f79b196d662ef7e02f9e85ece475ebb327e1dff3fcaa1282e3b6f748edb2\"" Apr 13 
19:24:14.170993 containerd[2133]: time="2026-04-13T19:24:14.170937323Z" level=info msg="StartContainer for \"c161f79b196d662ef7e02f9e85ece475ebb327e1dff3fcaa1282e3b6f748edb2\"" Apr 13 19:24:14.174026 containerd[2133]: time="2026-04-13T19:24:14.173942879Z" level=info msg="CreateContainer within sandbox \"c4e217f373e7b751203f7150f13bb8ef0e01264c66d70a5f46a53e331661f049\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:24:14.179499 containerd[2133]: time="2026-04-13T19:24:14.179126903Z" level=info msg="CreateContainer within sandbox \"440ca907bbd6d2e8832d1e4bc70814177619f6d3a1e84d9dd7efa6f609d95355\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 19:24:14.211451 containerd[2133]: time="2026-04-13T19:24:14.211261259Z" level=info msg="CreateContainer within sandbox \"c4e217f373e7b751203f7150f13bb8ef0e01264c66d70a5f46a53e331661f049\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"496e8a912a9d936c683d528876e276f4fa7dd79613bbbddb996f089c59eb0a00\"" Apr 13 19:24:14.213737 containerd[2133]: time="2026-04-13T19:24:14.213229511Z" level=info msg="StartContainer for \"496e8a912a9d936c683d528876e276f4fa7dd79613bbbddb996f089c59eb0a00\"" Apr 13 19:24:14.217037 kubelet[3042]: I0413 19:24:14.216954 3042 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-195" Apr 13 19:24:14.220167 kubelet[3042]: E0413 19:24:14.217884 3042 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.195:6443/api/v1/nodes\": dial tcp 172.31.26.195:6443: connect: connection refused" node="ip-172-31-26-195" Apr 13 19:24:14.239034 containerd[2133]: time="2026-04-13T19:24:14.238897691Z" level=info msg="CreateContainer within sandbox \"440ca907bbd6d2e8832d1e4bc70814177619f6d3a1e84d9dd7efa6f609d95355\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"6229e49ff8c4961ac14cf6fa21699f1411f4f1cab92ef687310b01dd8de9db1c\"" Apr 13 19:24:14.240728 containerd[2133]: time="2026-04-13T19:24:14.240644195Z" level=info msg="StartContainer for \"6229e49ff8c4961ac14cf6fa21699f1411f4f1cab92ef687310b01dd8de9db1c\"" Apr 13 19:24:14.412142 containerd[2133]: time="2026-04-13T19:24:14.411998904Z" level=info msg="StartContainer for \"c161f79b196d662ef7e02f9e85ece475ebb327e1dff3fcaa1282e3b6f748edb2\" returns successfully" Apr 13 19:24:14.501346 containerd[2133]: time="2026-04-13T19:24:14.501179316Z" level=info msg="StartContainer for \"496e8a912a9d936c683d528876e276f4fa7dd79613bbbddb996f089c59eb0a00\" returns successfully" Apr 13 19:24:14.503916 containerd[2133]: time="2026-04-13T19:24:14.501898488Z" level=info msg="StartContainer for \"6229e49ff8c4961ac14cf6fa21699f1411f4f1cab92ef687310b01dd8de9db1c\" returns successfully" Apr 13 19:24:14.646954 kubelet[3042]: E0413 19:24:14.646875 3042 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.26.195:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.195:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:24:14.706644 kubelet[3042]: E0413 19:24:14.706560 3042 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-195\" not found" node="ip-172-31-26-195" Apr 13 19:24:14.721144 kubelet[3042]: E0413 19:24:14.721070 3042 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-195\" not found" node="ip-172-31-26-195" Apr 13 19:24:14.721507 kubelet[3042]: E0413 19:24:14.721471 3042 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-195\" not found" node="ip-172-31-26-195" Apr 13 
19:24:14.911838 update_engine[2114]: I20260413 19:24:14.911738 2114 update_attempter.cc:509] Updating boot flags... Apr 13 19:24:15.082863 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3333) Apr 13 19:24:15.733783 kubelet[3042]: E0413 19:24:15.733329 3042 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-195\" not found" node="ip-172-31-26-195" Apr 13 19:24:15.739818 kubelet[3042]: E0413 19:24:15.735876 3042 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-195\" not found" node="ip-172-31-26-195" Apr 13 19:24:15.823651 kubelet[3042]: I0413 19:24:15.823603 3042 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-195" Apr 13 19:24:15.876819 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3339) Apr 13 19:24:18.499481 kubelet[3042]: E0413 19:24:18.499282 3042 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-195\" not found" node="ip-172-31-26-195" Apr 13 19:24:19.172958 kubelet[3042]: E0413 19:24:19.172913 3042 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-195\" not found" node="ip-172-31-26-195" Apr 13 19:24:19.198072 kubelet[3042]: I0413 19:24:19.198008 3042 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-195" Apr 13 19:24:19.198072 kubelet[3042]: E0413 19:24:19.198075 3042 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-195\": node \"ip-172-31-26-195\" not found" Apr 13 19:24:19.297871 kubelet[3042]: I0413 19:24:19.297788 3042 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-195" Apr 13 19:24:19.352859 
kubelet[3042]: E0413 19:24:19.352637 3042 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-195\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-195" Apr 13 19:24:19.352859 kubelet[3042]: I0413 19:24:19.352723 3042 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:19.366248 kubelet[3042]: E0413 19:24:19.365373 3042 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-195\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:19.366248 kubelet[3042]: I0413 19:24:19.366020 3042 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-195" Apr 13 19:24:19.379360 kubelet[3042]: E0413 19:24:19.379288 3042 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-195\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-195" Apr 13 19:24:19.587003 kubelet[3042]: I0413 19:24:19.586581 3042 apiserver.go:52] "Watching apiserver" Apr 13 19:24:19.598723 kubelet[3042]: I0413 19:24:19.597773 3042 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 19:24:21.216313 systemd[1]: Reloading requested from client PID 3507 ('systemctl') (unit session-7.scope)... Apr 13 19:24:21.216351 systemd[1]: Reloading... Apr 13 19:24:21.391727 zram_generator::config[3550]: No configuration found. Apr 13 19:24:21.671531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:24:21.866222 systemd[1]: Reloading finished in 649 ms. 
Apr 13 19:24:21.942968 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:21.959331 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 19:24:21.960153 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:21.969579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:22.319030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:22.339868 (kubelet)[3617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:24:22.438831 kubelet[3617]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:22.438831 kubelet[3617]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:24:22.438831 kubelet[3617]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 19:24:22.438831 kubelet[3617]: I0413 19:24:22.437086 3617 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:24:22.451925 kubelet[3617]: I0413 19:24:22.451849 3617 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 19:24:22.451925 kubelet[3617]: I0413 19:24:22.451912 3617 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:24:22.452328 kubelet[3617]: I0413 19:24:22.452285 3617 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:24:22.455101 kubelet[3617]: I0413 19:24:22.455051 3617 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:24:22.460277 kubelet[3617]: I0413 19:24:22.460048 3617 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:24:22.467074 kubelet[3617]: E0413 19:24:22.467009 3617 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:24:22.467074 kubelet[3617]: I0413 19:24:22.467073 3617 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 19:24:22.473186 kubelet[3617]: I0413 19:24:22.473141 3617 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 19:24:22.474595 kubelet[3617]: I0413 19:24:22.474218 3617 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:24:22.475565 kubelet[3617]: I0413 19:24:22.474312 3617 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-195","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 13 19:24:22.475565 kubelet[3617]: I0413 19:24:22.475472 3617 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
19:24:22.475565 kubelet[3617]: I0413 19:24:22.475497 3617 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 19:24:22.475915 kubelet[3617]: I0413 19:24:22.475588 3617 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:22.475982 kubelet[3617]: I0413 19:24:22.475943 3617 kubelet.go:480] "Attempting to sync node with API server" Apr 13 19:24:22.475982 kubelet[3617]: I0413 19:24:22.475977 3617 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:24:22.476073 kubelet[3617]: I0413 19:24:22.476025 3617 kubelet.go:386] "Adding apiserver pod source" Apr 13 19:24:22.476073 kubelet[3617]: I0413 19:24:22.476055 3617 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:24:22.484101 kubelet[3617]: I0413 19:24:22.484049 3617 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:24:22.486257 kubelet[3617]: I0413 19:24:22.485171 3617 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:24:22.490145 kubelet[3617]: I0413 19:24:22.490098 3617 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 19:24:22.490258 kubelet[3617]: I0413 19:24:22.490185 3617 server.go:1289] "Started kubelet" Apr 13 19:24:22.498009 kubelet[3617]: I0413 19:24:22.497896 3617 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:24:22.523720 kubelet[3617]: I0413 19:24:22.516112 3617 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:24:22.535606 kubelet[3617]: I0413 19:24:22.535513 3617 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:24:22.545417 kubelet[3617]: I0413 19:24:22.545053 3617 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:24:22.552497 kubelet[3617]: I0413 19:24:22.551466 3617 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:24:22.552497 kubelet[3617]: I0413 19:24:22.550003 3617 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 19:24:22.555754 kubelet[3617]: I0413 19:24:22.546896 3617 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:24:22.556080 kubelet[3617]: I0413 19:24:22.550024 3617 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 19:24:22.558655 kubelet[3617]: I0413 19:24:22.558100 3617 reconciler.go:26] "Reconciler: start to sync state" Apr 13 19:24:22.558655 kubelet[3617]: E0413 19:24:22.550262 3617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-195\" not found" Apr 13 19:24:22.577968 kubelet[3617]: I0413 19:24:22.572489 3617 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 19:24:22.602422 kubelet[3617]: I0413 19:24:22.602374 3617 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:24:22.608966 kubelet[3617]: I0413 19:24:22.608919 3617 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:24:22.618666 kubelet[3617]: E0413 19:24:22.618617 3617 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:24:22.630027 kubelet[3617]: I0413 19:24:22.628813 3617 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:24:22.647176 kubelet[3617]: I0413 19:24:22.646060 3617 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Apr 13 19:24:22.647176 kubelet[3617]: I0413 19:24:22.646114 3617 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 19:24:22.647176 kubelet[3617]: I0413 19:24:22.646145 3617 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 19:24:22.647176 kubelet[3617]: I0413 19:24:22.646159 3617 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 19:24:22.647176 kubelet[3617]: E0413 19:24:22.646238 3617 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:24:22.746735 kubelet[3617]: E0413 19:24:22.746654 3617 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 19:24:22.775031 kubelet[3617]: I0413 19:24:22.774966 3617 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:24:22.775031 kubelet[3617]: I0413 19:24:22.775000 3617 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:24:22.775844 kubelet[3617]: I0413 19:24:22.775066 3617 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:22.775844 kubelet[3617]: I0413 19:24:22.775307 3617 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 19:24:22.775844 kubelet[3617]: I0413 19:24:22.775328 3617 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 19:24:22.775844 kubelet[3617]: I0413 19:24:22.775361 3617 policy_none.go:49] "None policy: Start" Apr 13 19:24:22.775844 kubelet[3617]: I0413 19:24:22.775380 3617 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 19:24:22.775844 kubelet[3617]: I0413 19:24:22.775400 3617 state_mem.go:35] "Initializing new in-memory state store" Apr 13 19:24:22.775844 kubelet[3617]: I0413 19:24:22.775556 3617 state_mem.go:75] "Updated machine memory state" Apr 13 19:24:22.778529 kubelet[3617]: E0413 19:24:22.778475 
3617 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:24:22.778842 kubelet[3617]: I0413 19:24:22.778801 3617 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:24:22.778976 kubelet[3617]: I0413 19:24:22.778834 3617 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:24:22.781043 kubelet[3617]: I0413 19:24:22.781012 3617 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:24:22.786982 kubelet[3617]: E0413 19:24:22.786421 3617 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:24:22.904836 kubelet[3617]: I0413 19:24:22.904345 3617 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-195" Apr 13 19:24:22.924670 kubelet[3617]: I0413 19:24:22.923323 3617 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-195" Apr 13 19:24:22.924670 kubelet[3617]: I0413 19:24:22.923440 3617 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-195" Apr 13 19:24:22.948640 kubelet[3617]: I0413 19:24:22.948575 3617 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-195" Apr 13 19:24:22.950952 kubelet[3617]: I0413 19:24:22.950254 3617 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:22.951077 kubelet[3617]: I0413 19:24:22.951055 3617 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-195" Apr 13 19:24:22.962733 kubelet[3617]: I0413 19:24:22.962315 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/59db50c4565dd218f53839f72c209247-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-195\" (UID: \"59db50c4565dd218f53839f72c209247\") " pod="kube-system/kube-scheduler-ip-172-31-26-195" Apr 13 19:24:22.962866 kubelet[3617]: I0413 19:24:22.962816 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf3cb19941c8508ad79c22ddebd04277-ca-certs\") pod \"kube-apiserver-ip-172-31-26-195\" (UID: \"cf3cb19941c8508ad79c22ddebd04277\") " pod="kube-system/kube-apiserver-ip-172-31-26-195" Apr 13 19:24:22.963256 kubelet[3617]: I0413 19:24:22.963006 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf3cb19941c8508ad79c22ddebd04277-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-195\" (UID: \"cf3cb19941c8508ad79c22ddebd04277\") " pod="kube-system/kube-apiserver-ip-172-31-26-195" Apr 13 19:24:22.963256 kubelet[3617]: I0413 19:24:22.963090 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf3cb19941c8508ad79c22ddebd04277-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-195\" (UID: \"cf3cb19941c8508ad79c22ddebd04277\") " pod="kube-system/kube-apiserver-ip-172-31-26-195" Apr 13 19:24:22.963256 kubelet[3617]: I0413 19:24:22.963135 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/605401f7cde1a970aa588cff29b27ba8-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-195\" (UID: \"605401f7cde1a970aa588cff29b27ba8\") " pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:22.964580 kubelet[3617]: I0413 19:24:22.963326 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/605401f7cde1a970aa588cff29b27ba8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-195\" (UID: \"605401f7cde1a970aa588cff29b27ba8\") " pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:22.964580 kubelet[3617]: I0413 19:24:22.964411 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/605401f7cde1a970aa588cff29b27ba8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-195\" (UID: \"605401f7cde1a970aa588cff29b27ba8\") " pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:22.964580 kubelet[3617]: I0413 19:24:22.964516 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605401f7cde1a970aa588cff29b27ba8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-195\" (UID: \"605401f7cde1a970aa588cff29b27ba8\") " pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:22.964580 kubelet[3617]: I0413 19:24:22.964580 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/605401f7cde1a970aa588cff29b27ba8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-195\" (UID: \"605401f7cde1a970aa588cff29b27ba8\") " pod="kube-system/kube-controller-manager-ip-172-31-26-195" Apr 13 19:24:23.480413 kubelet[3617]: I0413 19:24:23.480284 3617 apiserver.go:52] "Watching apiserver" Apr 13 19:24:23.556531 kubelet[3617]: I0413 19:24:23.556462 3617 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 19:24:23.687503 kubelet[3617]: I0413 19:24:23.687354 3617 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-195" Apr 13 19:24:23.698816 
kubelet[3617]: E0413 19:24:23.698613 3617 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-195\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-195" Apr 13 19:24:23.728584 kubelet[3617]: I0413 19:24:23.727262 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-195" podStartSLOduration=1.727244674 podStartE2EDuration="1.727244674s" podCreationTimestamp="2026-04-13 19:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:23.726855454 +0000 UTC m=+1.372926428" watchObservedRunningTime="2026-04-13 19:24:23.727244674 +0000 UTC m=+1.373315648" Apr 13 19:24:23.771210 kubelet[3617]: I0413 19:24:23.770858 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-195" podStartSLOduration=1.770838262 podStartE2EDuration="1.770838262s" podCreationTimestamp="2026-04-13 19:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:23.752368282 +0000 UTC m=+1.398439256" watchObservedRunningTime="2026-04-13 19:24:23.770838262 +0000 UTC m=+1.416909248" Apr 13 19:24:23.803414 kubelet[3617]: I0413 19:24:23.803296 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-195" podStartSLOduration=1.803274623 podStartE2EDuration="1.803274623s" podCreationTimestamp="2026-04-13 19:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:23.77094535 +0000 UTC m=+1.417016312" watchObservedRunningTime="2026-04-13 19:24:23.803274623 +0000 UTC m=+1.449345609" Apr 13 19:24:27.869218 kubelet[3617]: I0413 19:24:27.868997 3617 
kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:24:27.870909 containerd[2133]: time="2026-04-13T19:24:27.870647571Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 19:24:27.872743 kubelet[3617]: I0413 19:24:27.872550 3617 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:24:28.599710 kubelet[3617]: I0413 19:24:28.599433 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b73f416f-ebcb-4f17-b548-fe03e40c986d-xtables-lock\") pod \"kube-proxy-4bnw2\" (UID: \"b73f416f-ebcb-4f17-b548-fe03e40c986d\") " pod="kube-system/kube-proxy-4bnw2" Apr 13 19:24:28.599710 kubelet[3617]: I0413 19:24:28.599507 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b73f416f-ebcb-4f17-b548-fe03e40c986d-lib-modules\") pod \"kube-proxy-4bnw2\" (UID: \"b73f416f-ebcb-4f17-b548-fe03e40c986d\") " pod="kube-system/kube-proxy-4bnw2" Apr 13 19:24:28.599710 kubelet[3617]: I0413 19:24:28.599544 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mxnr\" (UniqueName: \"kubernetes.io/projected/b73f416f-ebcb-4f17-b548-fe03e40c986d-kube-api-access-7mxnr\") pod \"kube-proxy-4bnw2\" (UID: \"b73f416f-ebcb-4f17-b548-fe03e40c986d\") " pod="kube-system/kube-proxy-4bnw2" Apr 13 19:24:28.599710 kubelet[3617]: I0413 19:24:28.599586 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b73f416f-ebcb-4f17-b548-fe03e40c986d-kube-proxy\") pod \"kube-proxy-4bnw2\" (UID: \"b73f416f-ebcb-4f17-b548-fe03e40c986d\") " pod="kube-system/kube-proxy-4bnw2" Apr 13 19:24:28.884276 
containerd[2133]: time="2026-04-13T19:24:28.882654532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bnw2,Uid:b73f416f-ebcb-4f17-b548-fe03e40c986d,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:28.951036 containerd[2133]: time="2026-04-13T19:24:28.950484784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:28.951036 containerd[2133]: time="2026-04-13T19:24:28.950597380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:28.951036 containerd[2133]: time="2026-04-13T19:24:28.950661904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:28.951036 containerd[2133]: time="2026-04-13T19:24:28.950929780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:28.997042 systemd[1]: run-containerd-runc-k8s.io-c925102f702a9be087115bc444dbf42c1fd43362804ef5954f68f88d6d0fe7be-runc.Y4kFQR.mount: Deactivated successfully. 
Apr 13 19:24:29.040774 containerd[2133]: time="2026-04-13T19:24:29.040657225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bnw2,Uid:b73f416f-ebcb-4f17-b548-fe03e40c986d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c925102f702a9be087115bc444dbf42c1fd43362804ef5954f68f88d6d0fe7be\"" Apr 13 19:24:29.059893 containerd[2133]: time="2026-04-13T19:24:29.059012653Z" level=info msg="CreateContainer within sandbox \"c925102f702a9be087115bc444dbf42c1fd43362804ef5954f68f88d6d0fe7be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:24:29.110444 containerd[2133]: time="2026-04-13T19:24:29.110386693Z" level=info msg="CreateContainer within sandbox \"c925102f702a9be087115bc444dbf42c1fd43362804ef5954f68f88d6d0fe7be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"289ebd937dc6f951494cf4a8872cb25685d0ed0efa67d86c96443632fe8f2f54\"" Apr 13 19:24:29.117270 containerd[2133]: time="2026-04-13T19:24:29.112644577Z" level=info msg="StartContainer for \"289ebd937dc6f951494cf4a8872cb25685d0ed0efa67d86c96443632fe8f2f54\"" Apr 13 19:24:29.203379 kubelet[3617]: I0413 19:24:29.203231 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzcr7\" (UniqueName: \"kubernetes.io/projected/d7096666-0f43-44ac-b9df-3de4b481f6d7-kube-api-access-zzcr7\") pod \"tigera-operator-6bf85f8dd-54v78\" (UID: \"d7096666-0f43-44ac-b9df-3de4b481f6d7\") " pod="tigera-operator/tigera-operator-6bf85f8dd-54v78" Apr 13 19:24:29.203379 kubelet[3617]: I0413 19:24:29.203308 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7096666-0f43-44ac-b9df-3de4b481f6d7-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-54v78\" (UID: \"d7096666-0f43-44ac-b9df-3de4b481f6d7\") " pod="tigera-operator/tigera-operator-6bf85f8dd-54v78" Apr 13 19:24:29.267287 containerd[2133]: 
time="2026-04-13T19:24:29.267085742Z" level=info msg="StartContainer for \"289ebd937dc6f951494cf4a8872cb25685d0ed0efa67d86c96443632fe8f2f54\" returns successfully" Apr 13 19:24:29.474792 containerd[2133]: time="2026-04-13T19:24:29.474215667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-54v78,Uid:d7096666-0f43-44ac-b9df-3de4b481f6d7,Namespace:tigera-operator,Attempt:0,}" Apr 13 19:24:29.527700 containerd[2133]: time="2026-04-13T19:24:29.527219163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:29.527700 containerd[2133]: time="2026-04-13T19:24:29.527326719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:29.527700 containerd[2133]: time="2026-04-13T19:24:29.527363679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:29.527700 containerd[2133]: time="2026-04-13T19:24:29.527528355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:29.655280 containerd[2133]: time="2026-04-13T19:24:29.655200304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-54v78,Uid:d7096666-0f43-44ac-b9df-3de4b481f6d7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"200af34ba52cb16ba002aba47997b960cc1bda22d40ee172674e8f3cccd5a62e\"" Apr 13 19:24:29.663438 containerd[2133]: time="2026-04-13T19:24:29.663367168Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 19:24:29.723340 kubelet[3617]: I0413 19:24:29.722048 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4bnw2" podStartSLOduration=1.722028976 podStartE2EDuration="1.722028976s" podCreationTimestamp="2026-04-13 19:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:29.72081844 +0000 UTC m=+7.366889426" watchObservedRunningTime="2026-04-13 19:24:29.722028976 +0000 UTC m=+7.368099950" Apr 13 19:24:30.964671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1121793218.mount: Deactivated successfully. 
Apr 13 19:24:32.482567 containerd[2133]: time="2026-04-13T19:24:32.481805454Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:32.483952 containerd[2133]: time="2026-04-13T19:24:32.483557958Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 13 19:24:32.486178 containerd[2133]: time="2026-04-13T19:24:32.485794698Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:32.492213 containerd[2133]: time="2026-04-13T19:24:32.492161190Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:32.493952 containerd[2133]: time="2026-04-13T19:24:32.493742118Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.829750134s" Apr 13 19:24:32.493952 containerd[2133]: time="2026-04-13T19:24:32.493801998Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 13 19:24:32.503213 containerd[2133]: time="2026-04-13T19:24:32.503149014Z" level=info msg="CreateContainer within sandbox \"200af34ba52cb16ba002aba47997b960cc1bda22d40ee172674e8f3cccd5a62e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 19:24:32.526505 containerd[2133]: time="2026-04-13T19:24:32.526422786Z" level=info msg="CreateContainer within sandbox 
\"200af34ba52cb16ba002aba47997b960cc1bda22d40ee172674e8f3cccd5a62e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d0a0249bc640c5c65672a203c9523d4b1308f153d193673efbbcccaa08b04210\"" Apr 13 19:24:32.527174 containerd[2133]: time="2026-04-13T19:24:32.527132202Z" level=info msg="StartContainer for \"d0a0249bc640c5c65672a203c9523d4b1308f153d193673efbbcccaa08b04210\"" Apr 13 19:24:32.627972 containerd[2133]: time="2026-04-13T19:24:32.627778158Z" level=info msg="StartContainer for \"d0a0249bc640c5c65672a203c9523d4b1308f153d193673efbbcccaa08b04210\" returns successfully" Apr 13 19:24:39.606081 sudo[2506]: pam_unix(sudo:session): session closed for user root Apr 13 19:24:39.779732 sshd[2502]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:39.799180 systemd[1]: sshd@6-172.31.26.195:22-4.175.71.9:47138.service: Deactivated successfully. Apr 13 19:24:39.811138 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 19:24:39.816089 systemd-logind[2104]: Session 7 logged out. Waiting for processes to exit. Apr 13 19:24:39.824111 systemd-logind[2104]: Removed session 7. 
Apr 13 19:24:50.756955 kubelet[3617]: I0413 19:24:50.756832 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-54v78" podStartSLOduration=18.920953054 podStartE2EDuration="21.756788244s" podCreationTimestamp="2026-04-13 19:24:29 +0000 UTC" firstStartedPulling="2026-04-13 19:24:29.660139864 +0000 UTC m=+7.306210838" lastFinishedPulling="2026-04-13 19:24:32.495975054 +0000 UTC m=+10.142046028" observedRunningTime="2026-04-13 19:24:32.729172747 +0000 UTC m=+10.375243721" watchObservedRunningTime="2026-04-13 19:24:50.756788244 +0000 UTC m=+28.402859254" Apr 13 19:24:50.852031 kubelet[3617]: I0413 19:24:50.851973 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24664864-fd34-4744-ab70-822dd20e813c-tigera-ca-bundle\") pod \"calico-typha-cbb689c47-jfsbf\" (UID: \"24664864-fd34-4744-ab70-822dd20e813c\") " pod="calico-system/calico-typha-cbb689c47-jfsbf" Apr 13 19:24:50.853707 kubelet[3617]: I0413 19:24:50.852891 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/24664864-fd34-4744-ab70-822dd20e813c-typha-certs\") pod \"calico-typha-cbb689c47-jfsbf\" (UID: \"24664864-fd34-4744-ab70-822dd20e813c\") " pod="calico-system/calico-typha-cbb689c47-jfsbf" Apr 13 19:24:50.853707 kubelet[3617]: I0413 19:24:50.853357 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pkqx\" (UniqueName: \"kubernetes.io/projected/24664864-fd34-4744-ab70-822dd20e813c-kube-api-access-8pkqx\") pod \"calico-typha-cbb689c47-jfsbf\" (UID: \"24664864-fd34-4744-ab70-822dd20e813c\") " pod="calico-system/calico-typha-cbb689c47-jfsbf" Apr 13 19:24:51.056875 kubelet[3617]: I0413 19:24:51.056184 3617 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5b743f83-e208-464e-9b5d-a7002a569f18-node-certs\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.059106 kubelet[3617]: I0413 19:24:51.057128 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-var-run-calico\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.059106 kubelet[3617]: I0413 19:24:51.058276 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-lib-modules\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.059106 kubelet[3617]: I0413 19:24:51.058344 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-xtables-lock\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.059106 kubelet[3617]: I0413 19:24:51.058379 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-policysync\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.059106 kubelet[3617]: I0413 19:24:51.058424 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: 
\"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-bpffs\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.059596 kubelet[3617]: I0413 19:24:51.058464 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-cni-bin-dir\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.059596 kubelet[3617]: I0413 19:24:51.058500 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-var-lib-calico\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.059596 kubelet[3617]: I0413 19:24:51.058539 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-nodeproc\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.059596 kubelet[3617]: I0413 19:24:51.058577 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtmgk\" (UniqueName: \"kubernetes.io/projected/5b743f83-e208-464e-9b5d-a7002a569f18-kube-api-access-mtmgk\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.059596 kubelet[3617]: I0413 19:24:51.058616 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-cni-log-dir\") 
pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.061190 kubelet[3617]: I0413 19:24:51.058649 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-sys-fs\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.062966 kubelet[3617]: I0413 19:24:51.061520 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-cni-net-dir\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.064279 kubelet[3617]: I0413 19:24:51.064004 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5b743f83-e208-464e-9b5d-a7002a569f18-flexvol-driver-host\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.065264 kubelet[3617]: I0413 19:24:51.064572 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b743f83-e208-464e-9b5d-a7002a569f18-tigera-ca-bundle\") pod \"calico-node-zpmxb\" (UID: \"5b743f83-e208-464e-9b5d-a7002a569f18\") " pod="calico-system/calico-node-zpmxb" Apr 13 19:24:51.077454 containerd[2133]: time="2026-04-13T19:24:51.077220430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cbb689c47-jfsbf,Uid:24664864-fd34-4744-ab70-822dd20e813c,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:51.121954 kubelet[3617]: E0413 19:24:51.118701 3617 pod_workers.go:1301] "Error syncing 
pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pjcq" podUID="d14c8f4d-16d8-4d7e-83af-5e5a012516fe" Apr 13 19:24:51.166162 kubelet[3617]: I0413 19:24:51.166092 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d14c8f4d-16d8-4d7e-83af-5e5a012516fe-varrun\") pod \"csi-node-driver-4pjcq\" (UID: \"d14c8f4d-16d8-4d7e-83af-5e5a012516fe\") " pod="calico-system/csi-node-driver-4pjcq" Apr 13 19:24:51.166351 kubelet[3617]: I0413 19:24:51.166251 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d14c8f4d-16d8-4d7e-83af-5e5a012516fe-registration-dir\") pod \"csi-node-driver-4pjcq\" (UID: \"d14c8f4d-16d8-4d7e-83af-5e5a012516fe\") " pod="calico-system/csi-node-driver-4pjcq" Apr 13 19:24:51.166351 kubelet[3617]: I0413 19:24:51.166293 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fck2h\" (UniqueName: \"kubernetes.io/projected/d14c8f4d-16d8-4d7e-83af-5e5a012516fe-kube-api-access-fck2h\") pod \"csi-node-driver-4pjcq\" (UID: \"d14c8f4d-16d8-4d7e-83af-5e5a012516fe\") " pod="calico-system/csi-node-driver-4pjcq" Apr 13 19:24:51.166457 kubelet[3617]: I0413 19:24:51.166378 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d14c8f4d-16d8-4d7e-83af-5e5a012516fe-socket-dir\") pod \"csi-node-driver-4pjcq\" (UID: \"d14c8f4d-16d8-4d7e-83af-5e5a012516fe\") " pod="calico-system/csi-node-driver-4pjcq" Apr 13 19:24:51.167722 kubelet[3617]: I0413 19:24:51.166538 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d14c8f4d-16d8-4d7e-83af-5e5a012516fe-kubelet-dir\") pod \"csi-node-driver-4pjcq\" (UID: \"d14c8f4d-16d8-4d7e-83af-5e5a012516fe\") " pod="calico-system/csi-node-driver-4pjcq" Apr 13 19:24:51.185737 kubelet[3617]: E0413 19:24:51.184066 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.185737 kubelet[3617]: W0413 19:24:51.184119 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.185737 kubelet[3617]: E0413 19:24:51.184163 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:51.198957 kubelet[3617]: E0413 19:24:51.195902 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.198957 kubelet[3617]: W0413 19:24:51.195948 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.198957 kubelet[3617]: E0413 19:24:51.195986 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:51.203801 containerd[2133]: time="2026-04-13T19:24:51.194504951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:51.203801 containerd[2133]: time="2026-04-13T19:24:51.194610443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:51.203801 containerd[2133]: time="2026-04-13T19:24:51.194667971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:51.203801 containerd[2133]: time="2026-04-13T19:24:51.199905767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:51.207953 kubelet[3617]: E0413 19:24:51.207141 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.207953 kubelet[3617]: W0413 19:24:51.207307 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.207953 kubelet[3617]: E0413 19:24:51.207572 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:51.210325 kubelet[3617]: E0413 19:24:51.209248 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.210325 kubelet[3617]: W0413 19:24:51.209298 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.210325 kubelet[3617]: E0413 19:24:51.209333 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:51.212756 kubelet[3617]: E0413 19:24:51.212334 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.212756 kubelet[3617]: W0413 19:24:51.212391 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.212756 kubelet[3617]: E0413 19:24:51.212441 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:51.217754 kubelet[3617]: E0413 19:24:51.216863 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.217754 kubelet[3617]: W0413 19:24:51.216907 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.217754 kubelet[3617]: E0413 19:24:51.216944 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:51.217754 kubelet[3617]: E0413 19:24:51.217431 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.217754 kubelet[3617]: W0413 19:24:51.217449 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.217754 kubelet[3617]: E0413 19:24:51.217471 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:51.218250 kubelet[3617]: E0413 19:24:51.218150 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.218250 kubelet[3617]: W0413 19:24:51.218171 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.218250 kubelet[3617]: E0413 19:24:51.218199 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:51.225928 kubelet[3617]: E0413 19:24:51.222499 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.225928 kubelet[3617]: W0413 19:24:51.222891 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.225928 kubelet[3617]: E0413 19:24:51.222933 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:51.238163 kubelet[3617]: E0413 19:24:51.238104 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.238163 kubelet[3617]: W0413 19:24:51.238148 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.238382 kubelet[3617]: E0413 19:24:51.238185 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:51.246732 kubelet[3617]: E0413 19:24:51.245656 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.246732 kubelet[3617]: W0413 19:24:51.245724 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.246732 kubelet[3617]: E0413 19:24:51.245763 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:51.248757 kubelet[3617]: E0413 19:24:51.247184 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.248757 kubelet[3617]: W0413 19:24:51.247227 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.248757 kubelet[3617]: E0413 19:24:51.247291 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:51.249036 kubelet[3617]: E0413 19:24:51.248892 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.249036 kubelet[3617]: W0413 19:24:51.248923 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.249036 kubelet[3617]: E0413 19:24:51.248954 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:51.256007 kubelet[3617]: E0413 19:24:51.254129 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.256007 kubelet[3617]: W0413 19:24:51.254179 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.256007 kubelet[3617]: E0413 19:24:51.254218 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:51.264367 kubelet[3617]: E0413 19:24:51.262414 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.264367 kubelet[3617]: W0413 19:24:51.264239 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.265079 kubelet[3617]: E0413 19:24:51.264289 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:51.272153 kubelet[3617]: E0413 19:24:51.271900 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.272153 kubelet[3617]: W0413 19:24:51.271949 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.272153 kubelet[3617]: E0413 19:24:51.271986 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:51.287135 kubelet[3617]: E0413 19:24:51.284341 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.287135 kubelet[3617]: W0413 19:24:51.284379 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.287135 kubelet[3617]: E0413 19:24:51.284413 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:51.291719 kubelet[3617]: E0413 19:24:51.289843 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:51.291719 kubelet[3617]: W0413 19:24:51.289884 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:51.291719 kubelet[3617]: E0413 19:24:51.289920 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:51.443746 containerd[2133]: time="2026-04-13T19:24:51.443598060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cbb689c47-jfsbf,Uid:24664864-fd34-4744-ab70-822dd20e813c,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef6a54bdcfb11c62f43f7539fc0f1f29933f5b7aa11c5a78128247659faa182e\"" Apr 13 19:24:51.453025 containerd[2133]: time="2026-04-13T19:24:51.452107704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 19:24:51.588014 containerd[2133]: time="2026-04-13T19:24:51.587212093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zpmxb,Uid:5b743f83-e208-464e-9b5d-a7002a569f18,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:51.635062 containerd[2133]: time="2026-04-13T19:24:51.634749373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:51.635062 containerd[2133]: time="2026-04-13T19:24:51.634850101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:51.635062 containerd[2133]: time="2026-04-13T19:24:51.634889749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:51.635486 containerd[2133]: time="2026-04-13T19:24:51.635088745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:51.717176 containerd[2133]: time="2026-04-13T19:24:51.716668417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zpmxb,Uid:5b743f83-e208-464e-9b5d-a7002a569f18,Namespace:calico-system,Attempt:0,} returns sandbox id \"24a0b8fad73349d845da0ca90bae46e58da8b09d391bddb50d5f54c95fc607f4\"" Apr 13 19:24:52.648762 kubelet[3617]: E0413 19:24:52.647790 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pjcq" podUID="d14c8f4d-16d8-4d7e-83af-5e5a012516fe" Apr 13 19:24:52.906828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130810264.mount: Deactivated successfully. Apr 13 19:24:53.901040 containerd[2133]: time="2026-04-13T19:24:53.900979312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:53.903579 containerd[2133]: time="2026-04-13T19:24:53.903492364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 13 19:24:53.904654 containerd[2133]: time="2026-04-13T19:24:53.904573924Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:53.910342 containerd[2133]: time="2026-04-13T19:24:53.909968620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:53.911886 containerd[2133]: time="2026-04-13T19:24:53.911810296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.459639976s" Apr 13 19:24:53.912392 containerd[2133]: time="2026-04-13T19:24:53.911882260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 13 19:24:53.916479 containerd[2133]: time="2026-04-13T19:24:53.915569548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 19:24:53.954500 containerd[2133]: time="2026-04-13T19:24:53.954429412Z" level=info msg="CreateContainer within sandbox \"ef6a54bdcfb11c62f43f7539fc0f1f29933f5b7aa11c5a78128247659faa182e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 19:24:53.983909 containerd[2133]: time="2026-04-13T19:24:53.983815060Z" level=info msg="CreateContainer within sandbox \"ef6a54bdcfb11c62f43f7539fc0f1f29933f5b7aa11c5a78128247659faa182e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a11453fcd27b8dc846b47b839cecb40ff143b232e8ace3c88bb6bd900f0f8f7f\"" Apr 13 19:24:53.985201 containerd[2133]: time="2026-04-13T19:24:53.985135108Z" level=info msg="StartContainer for \"a11453fcd27b8dc846b47b839cecb40ff143b232e8ace3c88bb6bd900f0f8f7f\"" Apr 13 19:24:54.114558 containerd[2133]: time="2026-04-13T19:24:54.114469597Z" level=info msg="StartContainer for \"a11453fcd27b8dc846b47b839cecb40ff143b232e8ace3c88bb6bd900f0f8f7f\" returns successfully" Apr 13 19:24:54.654932 kubelet[3617]: E0413 19:24:54.654287 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-4pjcq" podUID="d14c8f4d-16d8-4d7e-83af-5e5a012516fe" Apr 13 19:24:54.879487 kubelet[3617]: E0413 19:24:54.878810 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:54.879487 kubelet[3617]: W0413 19:24:54.878860 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:54.879487 kubelet[3617]: E0413 19:24:54.878899 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:54.881289 kubelet[3617]: E0413 19:24:54.881060 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:54.881289 kubelet[3617]: W0413 19:24:54.881094 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:54.881289 kubelet[3617]: E0413 19:24:54.881171 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:54.919067 kubelet[3617]: E0413 19:24:54.918039 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:54.919067 kubelet[3617]: W0413 19:24:54.918060 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:54.919067 kubelet[3617]: E0413 19:24:54.918085 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:54.947284 kubelet[3617]: I0413 19:24:54.947137 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cbb689c47-jfsbf" podStartSLOduration=2.483443977 podStartE2EDuration="4.947108693s" podCreationTimestamp="2026-04-13 19:24:50 +0000 UTC" firstStartedPulling="2026-04-13 19:24:51.451462668 +0000 UTC m=+29.097533642" lastFinishedPulling="2026-04-13 19:24:53.915127396 +0000 UTC m=+31.561198358" observedRunningTime="2026-04-13 19:24:54.944106401 +0000 UTC m=+32.590177423" watchObservedRunningTime="2026-04-13 19:24:54.947108693 +0000 UTC m=+32.593179679" Apr 13 19:24:54.968974 kubelet[3617]: E0413 19:24:54.968910 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:54.968974 kubelet[3617]: W0413 19:24:54.968955 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:54.969254 kubelet[3617]: E0413 19:24:54.968991 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:54.975154 kubelet[3617]: E0413 19:24:54.974374 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:54.975154 kubelet[3617]: W0413 19:24:54.974456 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:54.975154 kubelet[3617]: E0413 19:24:54.974514 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:54.980756 kubelet[3617]: E0413 19:24:54.979105 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:54.980756 kubelet[3617]: W0413 19:24:54.979143 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:54.980756 kubelet[3617]: E0413 19:24:54.979179 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:54.986908 kubelet[3617]: E0413 19:24:54.985335 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:54.986908 kubelet[3617]: W0413 19:24:54.985726 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:54.986908 kubelet[3617]: E0413 19:24:54.985766 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:54.991330 kubelet[3617]: E0413 19:24:54.991141 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:54.992182 kubelet[3617]: W0413 19:24:54.991935 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:54.992182 kubelet[3617]: E0413 19:24:54.992180 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:54.999096 kubelet[3617]: E0413 19:24:54.997437 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:54.999096 kubelet[3617]: W0413 19:24:54.998997 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.000127 kubelet[3617]: E0413 19:24:54.999320 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:55.001734 kubelet[3617]: E0413 19:24:55.001481 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.001734 kubelet[3617]: W0413 19:24:55.001549 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.002488 kubelet[3617]: E0413 19:24:55.001661 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:55.003483 kubelet[3617]: E0413 19:24:55.003336 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.003483 kubelet[3617]: W0413 19:24:55.003387 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.003483 kubelet[3617]: E0413 19:24:55.003440 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:55.004926 kubelet[3617]: E0413 19:24:55.004770 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.004926 kubelet[3617]: W0413 19:24:55.004818 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.004926 kubelet[3617]: E0413 19:24:55.004874 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:55.006232 kubelet[3617]: E0413 19:24:55.005985 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.006232 kubelet[3617]: W0413 19:24:55.006015 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.006232 kubelet[3617]: E0413 19:24:55.006042 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:55.007909 kubelet[3617]: E0413 19:24:55.007382 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.007909 kubelet[3617]: W0413 19:24:55.007560 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.007909 kubelet[3617]: E0413 19:24:55.007825 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:55.011300 kubelet[3617]: E0413 19:24:55.010249 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.011300 kubelet[3617]: W0413 19:24:55.010283 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.011300 kubelet[3617]: E0413 19:24:55.010319 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:55.014054 kubelet[3617]: E0413 19:24:55.012571 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.014054 kubelet[3617]: W0413 19:24:55.012618 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.014054 kubelet[3617]: E0413 19:24:55.013576 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:55.018048 kubelet[3617]: E0413 19:24:55.016406 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.018048 kubelet[3617]: W0413 19:24:55.016458 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.018048 kubelet[3617]: E0413 19:24:55.016498 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:55.021193 kubelet[3617]: E0413 19:24:55.019763 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.021193 kubelet[3617]: W0413 19:24:55.019825 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.021193 kubelet[3617]: E0413 19:24:55.019888 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:55.022651 kubelet[3617]: E0413 19:24:55.022507 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.022651 kubelet[3617]: W0413 19:24:55.022582 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.025010 kubelet[3617]: E0413 19:24:55.022794 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:55.027342 kubelet[3617]: E0413 19:24:55.027193 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.028381 kubelet[3617]: W0413 19:24:55.027514 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.028381 kubelet[3617]: E0413 19:24:55.027558 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:55.031818 kubelet[3617]: E0413 19:24:55.031203 3617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:55.031818 kubelet[3617]: W0413 19:24:55.031250 3617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:55.031818 kubelet[3617]: E0413 19:24:55.031311 3617 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:55.404336 containerd[2133]: time="2026-04-13T19:24:55.402791824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:55.405178 containerd[2133]: time="2026-04-13T19:24:55.405129760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 13 19:24:55.407351 containerd[2133]: time="2026-04-13T19:24:55.407309752Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:55.413276 containerd[2133]: time="2026-04-13T19:24:55.413187808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:55.415588 containerd[2133]: time="2026-04-13T19:24:55.415513792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.499872616s" Apr 13 19:24:55.415588 containerd[2133]: time="2026-04-13T19:24:55.415585168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 13 19:24:55.425638 containerd[2133]: time="2026-04-13T19:24:55.425585512Z" level=info msg="CreateContainer within sandbox \"24a0b8fad73349d845da0ca90bae46e58da8b09d391bddb50d5f54c95fc607f4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 19:24:55.457144 containerd[2133]: time="2026-04-13T19:24:55.456918328Z" level=info msg="CreateContainer within sandbox \"24a0b8fad73349d845da0ca90bae46e58da8b09d391bddb50d5f54c95fc607f4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6c55ede5a5285d4c37ef5cb79fb52c1354ca4ba92591ed485ac5d5db9aacc0eb\"" Apr 13 19:24:55.460225 containerd[2133]: time="2026-04-13T19:24:55.460035736Z" level=info msg="StartContainer for \"6c55ede5a5285d4c37ef5cb79fb52c1354ca4ba92591ed485ac5d5db9aacc0eb\"" Apr 13 19:24:55.593676 containerd[2133]: time="2026-04-13T19:24:55.593524408Z" level=info msg="StartContainer for \"6c55ede5a5285d4c37ef5cb79fb52c1354ca4ba92591ed485ac5d5db9aacc0eb\" returns successfully" Apr 13 19:24:55.843204 kubelet[3617]: I0413 19:24:55.841582 3617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:24:55.928772 systemd[1]: run-containerd-runc-k8s.io-6c55ede5a5285d4c37ef5cb79fb52c1354ca4ba92591ed485ac5d5db9aacc0eb-runc.hBv2wv.mount: Deactivated successfully. Apr 13 19:24:55.929079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c55ede5a5285d4c37ef5cb79fb52c1354ca4ba92591ed485ac5d5db9aacc0eb-rootfs.mount: Deactivated successfully. 
Apr 13 19:24:56.167407 containerd[2133]: time="2026-04-13T19:24:56.167088207Z" level=info msg="shim disconnected" id=6c55ede5a5285d4c37ef5cb79fb52c1354ca4ba92591ed485ac5d5db9aacc0eb namespace=k8s.io Apr 13 19:24:56.167407 containerd[2133]: time="2026-04-13T19:24:56.167200899Z" level=warning msg="cleaning up after shim disconnected" id=6c55ede5a5285d4c37ef5cb79fb52c1354ca4ba92591ed485ac5d5db9aacc0eb namespace=k8s.io Apr 13 19:24:56.167407 containerd[2133]: time="2026-04-13T19:24:56.167222943Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:56.648606 kubelet[3617]: E0413 19:24:56.647546 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pjcq" podUID="d14c8f4d-16d8-4d7e-83af-5e5a012516fe" Apr 13 19:24:56.850441 containerd[2133]: time="2026-04-13T19:24:56.850378015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 19:24:58.673910 kubelet[3617]: E0413 19:24:58.671079 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pjcq" podUID="d14c8f4d-16d8-4d7e-83af-5e5a012516fe" Apr 13 19:25:00.647275 kubelet[3617]: E0413 19:25:00.647211 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pjcq" podUID="d14c8f4d-16d8-4d7e-83af-5e5a012516fe" Apr 13 19:25:02.648755 kubelet[3617]: E0413 19:25:02.647661 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pjcq" podUID="d14c8f4d-16d8-4d7e-83af-5e5a012516fe" Apr 13 19:25:03.804453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492287548.mount: Deactivated successfully. Apr 13 19:25:03.862107 containerd[2133]: time="2026-04-13T19:25:03.861991874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:03.863755 containerd[2133]: time="2026-04-13T19:25:03.863657810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 13 19:25:03.864728 containerd[2133]: time="2026-04-13T19:25:03.864335894Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:03.870724 containerd[2133]: time="2026-04-13T19:25:03.870348386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:03.872741 containerd[2133]: time="2026-04-13T19:25:03.871916666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 7.021470395s" Apr 13 19:25:03.872741 containerd[2133]: time="2026-04-13T19:25:03.871979462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 13 
19:25:03.879934 containerd[2133]: time="2026-04-13T19:25:03.879859274Z" level=info msg="CreateContainer within sandbox \"24a0b8fad73349d845da0ca90bae46e58da8b09d391bddb50d5f54c95fc607f4\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 19:25:03.902761 containerd[2133]: time="2026-04-13T19:25:03.902566850Z" level=info msg="CreateContainer within sandbox \"24a0b8fad73349d845da0ca90bae46e58da8b09d391bddb50d5f54c95fc607f4\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"b22da3117863bbb880555b065dd85e0d32c9b39079bcf149d7a90cbe55ec1f6d\"" Apr 13 19:25:03.905641 containerd[2133]: time="2026-04-13T19:25:03.905030642Z" level=info msg="StartContainer for \"b22da3117863bbb880555b065dd85e0d32c9b39079bcf149d7a90cbe55ec1f6d\"" Apr 13 19:25:04.023039 containerd[2133]: time="2026-04-13T19:25:04.022965670Z" level=info msg="StartContainer for \"b22da3117863bbb880555b065dd85e0d32c9b39079bcf149d7a90cbe55ec1f6d\" returns successfully" Apr 13 19:25:04.647676 kubelet[3617]: E0413 19:25:04.647115 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pjcq" podUID="d14c8f4d-16d8-4d7e-83af-5e5a012516fe" Apr 13 19:25:04.803589 containerd[2133]: time="2026-04-13T19:25:04.801469814Z" level=info msg="shim disconnected" id=b22da3117863bbb880555b065dd85e0d32c9b39079bcf149d7a90cbe55ec1f6d namespace=k8s.io Apr 13 19:25:04.803589 containerd[2133]: time="2026-04-13T19:25:04.801548426Z" level=warning msg="cleaning up after shim disconnected" id=b22da3117863bbb880555b065dd85e0d32c9b39079bcf149d7a90cbe55ec1f6d namespace=k8s.io Apr 13 19:25:04.803589 containerd[2133]: time="2026-04-13T19:25:04.801569090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:04.808660 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-b22da3117863bbb880555b065dd85e0d32c9b39079bcf149d7a90cbe55ec1f6d-rootfs.mount: Deactivated successfully. Apr 13 19:25:04.832445 containerd[2133]: time="2026-04-13T19:25:04.832044962Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:25:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 19:25:04.881036 containerd[2133]: time="2026-04-13T19:25:04.880002759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 19:25:06.647142 kubelet[3617]: E0413 19:25:06.647083 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pjcq" podUID="d14c8f4d-16d8-4d7e-83af-5e5a012516fe" Apr 13 19:25:08.092602 containerd[2133]: time="2026-04-13T19:25:08.092512875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:08.094712 containerd[2133]: time="2026-04-13T19:25:08.094640667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 13 19:25:08.098181 containerd[2133]: time="2026-04-13T19:25:08.096764643Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:08.104394 containerd[2133]: time="2026-04-13T19:25:08.103465107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:08.106398 containerd[2133]: 
time="2026-04-13T19:25:08.106338519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.226271452s" Apr 13 19:25:08.106590 containerd[2133]: time="2026-04-13T19:25:08.106559355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 13 19:25:08.121518 containerd[2133]: time="2026-04-13T19:25:08.121444455Z" level=info msg="CreateContainer within sandbox \"24a0b8fad73349d845da0ca90bae46e58da8b09d391bddb50d5f54c95fc607f4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 19:25:08.150218 containerd[2133]: time="2026-04-13T19:25:08.150117279Z" level=info msg="CreateContainer within sandbox \"24a0b8fad73349d845da0ca90bae46e58da8b09d391bddb50d5f54c95fc607f4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0642db71cb2c54e07811f98eacff1fe6a702b4c66c249189122832707dd1624a\"" Apr 13 19:25:08.152163 containerd[2133]: time="2026-04-13T19:25:08.151843983Z" level=info msg="StartContainer for \"0642db71cb2c54e07811f98eacff1fe6a702b4c66c249189122832707dd1624a\"" Apr 13 19:25:08.274939 containerd[2133]: time="2026-04-13T19:25:08.274874523Z" level=info msg="StartContainer for \"0642db71cb2c54e07811f98eacff1fe6a702b4c66c249189122832707dd1624a\" returns successfully" Apr 13 19:25:08.649718 kubelet[3617]: E0413 19:25:08.649452 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pjcq" 
podUID="d14c8f4d-16d8-4d7e-83af-5e5a012516fe" Apr 13 19:25:10.078908 containerd[2133]: time="2026-04-13T19:25:10.078843388Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:25:10.130598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0642db71cb2c54e07811f98eacff1fe6a702b4c66c249189122832707dd1624a-rootfs.mount: Deactivated successfully. Apr 13 19:25:10.131546 kubelet[3617]: I0413 19:25:10.131033 3617 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 13 19:25:10.140232 containerd[2133]: time="2026-04-13T19:25:10.140134385Z" level=info msg="shim disconnected" id=0642db71cb2c54e07811f98eacff1fe6a702b4c66c249189122832707dd1624a namespace=k8s.io Apr 13 19:25:10.140414 containerd[2133]: time="2026-04-13T19:25:10.140231345Z" level=warning msg="cleaning up after shim disconnected" id=0642db71cb2c54e07811f98eacff1fe6a702b4c66c249189122832707dd1624a namespace=k8s.io Apr 13 19:25:10.140414 containerd[2133]: time="2026-04-13T19:25:10.140253785Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:10.215142 containerd[2133]: time="2026-04-13T19:25:10.214928261Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:25:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 19:25:10.317550 kubelet[3617]: I0413 19:25:10.317318 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-whisker-backend-key-pair\") pod \"whisker-65f5db84d5-mpt4b\" (UID: \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\") " 
pod="calico-system/whisker-65f5db84d5-mpt4b" Apr 13 19:25:10.317550 kubelet[3617]: I0413 19:25:10.317442 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a02eceba-5363-4cf1-87ce-7f671e2cd0cc-config-volume\") pod \"coredns-674b8bbfcf-jpbcs\" (UID: \"a02eceba-5363-4cf1-87ce-7f671e2cd0cc\") " pod="kube-system/coredns-674b8bbfcf-jpbcs" Apr 13 19:25:10.318140 kubelet[3617]: I0413 19:25:10.317495 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftxsm\" (UniqueName: \"kubernetes.io/projected/a02eceba-5363-4cf1-87ce-7f671e2cd0cc-kube-api-access-ftxsm\") pod \"coredns-674b8bbfcf-jpbcs\" (UID: \"a02eceba-5363-4cf1-87ce-7f671e2cd0cc\") " pod="kube-system/coredns-674b8bbfcf-jpbcs" Apr 13 19:25:10.319112 kubelet[3617]: I0413 19:25:10.319043 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtgs8\" (UniqueName: \"kubernetes.io/projected/c8a1f799-8088-43db-b33b-f83deb990843-kube-api-access-jtgs8\") pod \"coredns-674b8bbfcf-rzcwq\" (UID: \"c8a1f799-8088-43db-b33b-f83deb990843\") " pod="kube-system/coredns-674b8bbfcf-rzcwq" Apr 13 19:25:10.319399 kubelet[3617]: I0413 19:25:10.319224 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49aa872e-1beb-42e6-8cd8-546921069c20-tigera-ca-bundle\") pod \"calico-kube-controllers-7949b6b746-86tp4\" (UID: \"49aa872e-1beb-42e6-8cd8-546921069c20\") " pod="calico-system/calico-kube-controllers-7949b6b746-86tp4" Apr 13 19:25:10.319742 kubelet[3617]: I0413 19:25:10.319485 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/702af813-e895-432b-a737-e7ecba2b6103-calico-apiserver-certs\") pod 
\"calico-apiserver-5d9697dc4b-j7tsc\" (UID: \"702af813-e895-432b-a737-e7ecba2b6103\") " pod="calico-system/calico-apiserver-5d9697dc4b-j7tsc" Apr 13 19:25:10.319952 kubelet[3617]: I0413 19:25:10.319840 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-whisker-ca-bundle\") pod \"whisker-65f5db84d5-mpt4b\" (UID: \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\") " pod="calico-system/whisker-65f5db84d5-mpt4b" Apr 13 19:25:10.320196 kubelet[3617]: I0413 19:25:10.320067 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrkng\" (UniqueName: \"kubernetes.io/projected/702af813-e895-432b-a737-e7ecba2b6103-kube-api-access-wrkng\") pod \"calico-apiserver-5d9697dc4b-j7tsc\" (UID: \"702af813-e895-432b-a737-e7ecba2b6103\") " pod="calico-system/calico-apiserver-5d9697dc4b-j7tsc" Apr 13 19:25:10.320395 kubelet[3617]: I0413 19:25:10.320154 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8a1f799-8088-43db-b33b-f83deb990843-config-volume\") pod \"coredns-674b8bbfcf-rzcwq\" (UID: \"c8a1f799-8088-43db-b33b-f83deb990843\") " pod="kube-system/coredns-674b8bbfcf-rzcwq" Apr 13 19:25:10.320559 kubelet[3617]: I0413 19:25:10.320337 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-nginx-config\") pod \"whisker-65f5db84d5-mpt4b\" (UID: \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\") " pod="calico-system/whisker-65f5db84d5-mpt4b" Apr 13 19:25:10.321228 kubelet[3617]: I0413 19:25:10.321196 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmv45\" (UniqueName: 
\"kubernetes.io/projected/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-kube-api-access-xmv45\") pod \"whisker-65f5db84d5-mpt4b\" (UID: \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\") " pod="calico-system/whisker-65f5db84d5-mpt4b" Apr 13 19:25:10.321572 kubelet[3617]: I0413 19:25:10.321494 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psthl\" (UniqueName: \"kubernetes.io/projected/49aa872e-1beb-42e6-8cd8-546921069c20-kube-api-access-psthl\") pod \"calico-kube-controllers-7949b6b746-86tp4\" (UID: \"49aa872e-1beb-42e6-8cd8-546921069c20\") " pod="calico-system/calico-kube-controllers-7949b6b746-86tp4" Apr 13 19:25:10.422850 kubelet[3617]: I0413 19:25:10.422670 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e86138-05a0-4187-9e4b-c73d50410649-config\") pod \"goldmane-5b85766d88-kpdfg\" (UID: \"76e86138-05a0-4187-9e4b-c73d50410649\") " pod="calico-system/goldmane-5b85766d88-kpdfg" Apr 13 19:25:10.424618 kubelet[3617]: I0413 19:25:10.423831 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5xn\" (UniqueName: \"kubernetes.io/projected/69ebf535-2663-4af7-9297-fd6777511804-kube-api-access-dk5xn\") pod \"calico-apiserver-5d9697dc4b-6kgd9\" (UID: \"69ebf535-2663-4af7-9297-fd6777511804\") " pod="calico-system/calico-apiserver-5d9697dc4b-6kgd9" Apr 13 19:25:10.424618 kubelet[3617]: I0413 19:25:10.424547 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/76e86138-05a0-4187-9e4b-c73d50410649-goldmane-key-pair\") pod \"goldmane-5b85766d88-kpdfg\" (UID: \"76e86138-05a0-4187-9e4b-c73d50410649\") " pod="calico-system/goldmane-5b85766d88-kpdfg" Apr 13 19:25:10.425912 kubelet[3617]: I0413 19:25:10.425750 3617 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-744m8\" (UniqueName: \"kubernetes.io/projected/76e86138-05a0-4187-9e4b-c73d50410649-kube-api-access-744m8\") pod \"goldmane-5b85766d88-kpdfg\" (UID: \"76e86138-05a0-4187-9e4b-c73d50410649\") " pod="calico-system/goldmane-5b85766d88-kpdfg" Apr 13 19:25:10.446766 kubelet[3617]: I0413 19:25:10.446049 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76e86138-05a0-4187-9e4b-c73d50410649-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-kpdfg\" (UID: \"76e86138-05a0-4187-9e4b-c73d50410649\") " pod="calico-system/goldmane-5b85766d88-kpdfg" Apr 13 19:25:10.446766 kubelet[3617]: I0413 19:25:10.446163 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/69ebf535-2663-4af7-9297-fd6777511804-calico-apiserver-certs\") pod \"calico-apiserver-5d9697dc4b-6kgd9\" (UID: \"69ebf535-2663-4af7-9297-fd6777511804\") " pod="calico-system/calico-apiserver-5d9697dc4b-6kgd9" Apr 13 19:25:10.582080 containerd[2133]: time="2026-04-13T19:25:10.582002935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpbcs,Uid:a02eceba-5363-4cf1-87ce-7f671e2cd0cc,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:10.586633 containerd[2133]: time="2026-04-13T19:25:10.586021819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f5db84d5-mpt4b,Uid:3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:10.586953 containerd[2133]: time="2026-04-13T19:25:10.586901851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7949b6b746-86tp4,Uid:49aa872e-1beb-42e6-8cd8-546921069c20,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:10.597740 containerd[2133]: time="2026-04-13T19:25:10.597407779Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rzcwq,Uid:c8a1f799-8088-43db-b33b-f83deb990843,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:10.606386 containerd[2133]: time="2026-04-13T19:25:10.606306067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9697dc4b-6kgd9,Uid:69ebf535-2663-4af7-9297-fd6777511804,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:10.608741 containerd[2133]: time="2026-04-13T19:25:10.608656975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9697dc4b-j7tsc,Uid:702af813-e895-432b-a737-e7ecba2b6103,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:10.611481 containerd[2133]: time="2026-04-13T19:25:10.611192599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-kpdfg,Uid:76e86138-05a0-4187-9e4b-c73d50410649,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:10.653964 containerd[2133]: time="2026-04-13T19:25:10.653885155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pjcq,Uid:d14c8f4d-16d8-4d7e-83af-5e5a012516fe,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:10.979151 containerd[2133]: time="2026-04-13T19:25:10.979081353Z" level=info msg="CreateContainer within sandbox \"24a0b8fad73349d845da0ca90bae46e58da8b09d391bddb50d5f54c95fc607f4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 19:25:11.050400 containerd[2133]: time="2026-04-13T19:25:11.050320517Z" level=info msg="CreateContainer within sandbox \"24a0b8fad73349d845da0ca90bae46e58da8b09d391bddb50d5f54c95fc607f4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"44f007bbf095e34772acd4dcba9a4a2a42c58fb7513c47811bc87f2a876c2794\"" Apr 13 19:25:11.052452 containerd[2133]: time="2026-04-13T19:25:11.052377953Z" level=info msg="StartContainer for \"44f007bbf095e34772acd4dcba9a4a2a42c58fb7513c47811bc87f2a876c2794\"" Apr 13 19:25:11.313332 systemd[1]: 
run-containerd-runc-k8s.io-44f007bbf095e34772acd4dcba9a4a2a42c58fb7513c47811bc87f2a876c2794-runc.lwdtEJ.mount: Deactivated successfully. Apr 13 19:25:11.398627 containerd[2133]: time="2026-04-13T19:25:11.392883331Z" level=error msg="Failed to destroy network for sandbox \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.406996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a-shm.mount: Deactivated successfully. Apr 13 19:25:11.412952 containerd[2133]: time="2026-04-13T19:25:11.401929507Z" level=error msg="encountered an error cleaning up failed sandbox \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.413853 containerd[2133]: time="2026-04-13T19:25:11.413246095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpbcs,Uid:a02eceba-5363-4cf1-87ce-7f671e2cd0cc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.414031 kubelet[3617]: E0413 19:25:11.413539 3617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.414031 kubelet[3617]: E0413 19:25:11.413625 3617 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jpbcs" Apr 13 19:25:11.414031 kubelet[3617]: E0413 19:25:11.413661 3617 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jpbcs" Apr 13 19:25:11.416414 kubelet[3617]: E0413 19:25:11.413787 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jpbcs_kube-system(a02eceba-5363-4cf1-87ce-7f671e2cd0cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jpbcs_kube-system(a02eceba-5363-4cf1-87ce-7f671e2cd0cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jpbcs" podUID="a02eceba-5363-4cf1-87ce-7f671e2cd0cc" Apr 13 19:25:11.445178 containerd[2133]: 
time="2026-04-13T19:25:11.444812143Z" level=error msg="Failed to destroy network for sandbox \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.455807 containerd[2133]: time="2026-04-13T19:25:11.451399135Z" level=error msg="encountered an error cleaning up failed sandbox \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.455807 containerd[2133]: time="2026-04-13T19:25:11.451524367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f5db84d5-mpt4b,Uid:3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.456088 kubelet[3617]: E0413 19:25:11.453494 3617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.456088 kubelet[3617]: E0413 19:25:11.453578 3617 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65f5db84d5-mpt4b" Apr 13 19:25:11.456088 kubelet[3617]: E0413 19:25:11.453611 3617 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65f5db84d5-mpt4b" Apr 13 19:25:11.456275 kubelet[3617]: E0413 19:25:11.453715 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65f5db84d5-mpt4b_calico-system(3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65f5db84d5-mpt4b_calico-system(3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65f5db84d5-mpt4b" podUID="3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8" Apr 13 19:25:11.466534 containerd[2133]: time="2026-04-13T19:25:11.466393015Z" level=error msg="Failed to destroy network for sandbox \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 
19:25:11.470091 containerd[2133]: time="2026-04-13T19:25:11.470008939Z" level=error msg="encountered an error cleaning up failed sandbox \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.470241 containerd[2133]: time="2026-04-13T19:25:11.470112787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7949b6b746-86tp4,Uid:49aa872e-1beb-42e6-8cd8-546921069c20,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.470554 kubelet[3617]: E0413 19:25:11.470426 3617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.470554 kubelet[3617]: E0413 19:25:11.470503 3617 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7949b6b746-86tp4" Apr 13 19:25:11.470554 kubelet[3617]: E0413 
19:25:11.470536 3617 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7949b6b746-86tp4" Apr 13 19:25:11.471913 kubelet[3617]: E0413 19:25:11.470628 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7949b6b746-86tp4_calico-system(49aa872e-1beb-42e6-8cd8-546921069c20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7949b6b746-86tp4_calico-system(49aa872e-1beb-42e6-8cd8-546921069c20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7949b6b746-86tp4" podUID="49aa872e-1beb-42e6-8cd8-546921069c20" Apr 13 19:25:11.475199 containerd[2133]: time="2026-04-13T19:25:11.474780187Z" level=error msg="Failed to destroy network for sandbox \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.477299 containerd[2133]: time="2026-04-13T19:25:11.476256031Z" level=error msg="encountered an error cleaning up failed sandbox \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.477299 containerd[2133]: time="2026-04-13T19:25:11.476364799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pjcq,Uid:d14c8f4d-16d8-4d7e-83af-5e5a012516fe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.477507 kubelet[3617]: E0413 19:25:11.477051 3617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.479322 kubelet[3617]: E0413 19:25:11.477262 3617 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pjcq" Apr 13 19:25:11.479322 kubelet[3617]: E0413 19:25:11.477640 3617 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pjcq" Apr 13 19:25:11.479322 kubelet[3617]: E0413 19:25:11.477785 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4pjcq_calico-system(d14c8f4d-16d8-4d7e-83af-5e5a012516fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4pjcq_calico-system(d14c8f4d-16d8-4d7e-83af-5e5a012516fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4pjcq" podUID="d14c8f4d-16d8-4d7e-83af-5e5a012516fe" Apr 13 19:25:11.500722 containerd[2133]: time="2026-04-13T19:25:11.499978003Z" level=error msg="Failed to destroy network for sandbox \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.500722 containerd[2133]: time="2026-04-13T19:25:11.500591839Z" level=error msg="encountered an error cleaning up failed sandbox \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.500935 containerd[2133]: time="2026-04-13T19:25:11.500672419Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5d9697dc4b-6kgd9,Uid:69ebf535-2663-4af7-9297-fd6777511804,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.501330 kubelet[3617]: E0413 19:25:11.501262 3617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.501437 kubelet[3617]: E0413 19:25:11.501356 3617 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d9697dc4b-6kgd9" Apr 13 19:25:11.501437 kubelet[3617]: E0413 19:25:11.501391 3617 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d9697dc4b-6kgd9" Apr 13 19:25:11.501555 kubelet[3617]: E0413 19:25:11.501473 3617 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9697dc4b-6kgd9_calico-system(69ebf535-2663-4af7-9297-fd6777511804)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9697dc4b-6kgd9_calico-system(69ebf535-2663-4af7-9297-fd6777511804)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5d9697dc4b-6kgd9" podUID="69ebf535-2663-4af7-9297-fd6777511804" Apr 13 19:25:11.510099 containerd[2133]: time="2026-04-13T19:25:11.509915864Z" level=error msg="Failed to destroy network for sandbox \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.512409 containerd[2133]: time="2026-04-13T19:25:11.512102516Z" level=error msg="encountered an error cleaning up failed sandbox \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.513020 containerd[2133]: time="2026-04-13T19:25:11.512948816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9697dc4b-j7tsc,Uid:702af813-e895-432b-a737-e7ecba2b6103,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.513709 kubelet[3617]: E0413 19:25:11.513371 3617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.513709 kubelet[3617]: E0413 19:25:11.513447 3617 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d9697dc4b-j7tsc" Apr 13 19:25:11.513709 kubelet[3617]: E0413 19:25:11.513492 3617 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d9697dc4b-j7tsc" Apr 13 19:25:11.516118 kubelet[3617]: E0413 19:25:11.513582 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9697dc4b-j7tsc_calico-system(702af813-e895-432b-a737-e7ecba2b6103)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9697dc4b-j7tsc_calico-system(702af813-e895-432b-a737-e7ecba2b6103)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5d9697dc4b-j7tsc" podUID="702af813-e895-432b-a737-e7ecba2b6103" Apr 13 19:25:11.518037 containerd[2133]: time="2026-04-13T19:25:11.515845124Z" level=error msg="Failed to destroy network for sandbox \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.518037 containerd[2133]: time="2026-04-13T19:25:11.517107152Z" level=error msg="encountered an error cleaning up failed sandbox \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.518872 containerd[2133]: time="2026-04-13T19:25:11.517182620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rzcwq,Uid:c8a1f799-8088-43db-b33b-f83deb990843,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.520173 kubelet[3617]: E0413 19:25:11.519319 3617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.520173 kubelet[3617]: E0413 19:25:11.519398 3617 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rzcwq" Apr 13 19:25:11.520173 kubelet[3617]: E0413 19:25:11.519433 3617 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rzcwq" Apr 13 19:25:11.520435 kubelet[3617]: E0413 19:25:11.519521 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rzcwq_kube-system(c8a1f799-8088-43db-b33b-f83deb990843)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rzcwq_kube-system(c8a1f799-8088-43db-b33b-f83deb990843)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rzcwq" 
podUID="c8a1f799-8088-43db-b33b-f83deb990843" Apr 13 19:25:11.535963 containerd[2133]: time="2026-04-13T19:25:11.535883804Z" level=error msg="Failed to destroy network for sandbox \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.537166 containerd[2133]: time="2026-04-13T19:25:11.537093800Z" level=error msg="encountered an error cleaning up failed sandbox \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.537355 containerd[2133]: time="2026-04-13T19:25:11.537184340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-kpdfg,Uid:76e86138-05a0-4187-9e4b-c73d50410649,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.539553 kubelet[3617]: E0413 19:25:11.537798 3617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:11.539553 kubelet[3617]: E0413 19:25:11.537875 3617 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-kpdfg" Apr 13 19:25:11.539553 kubelet[3617]: E0413 19:25:11.537909 3617 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-kpdfg" Apr 13 19:25:11.539957 kubelet[3617]: E0413 19:25:11.537987 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-kpdfg_calico-system(76e86138-05a0-4187-9e4b-c73d50410649)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-kpdfg_calico-system(76e86138-05a0-4187-9e4b-c73d50410649)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-kpdfg" podUID="76e86138-05a0-4187-9e4b-c73d50410649" Apr 13 19:25:11.552273 containerd[2133]: time="2026-04-13T19:25:11.552205844Z" level=info msg="StartContainer for \"44f007bbf095e34772acd4dcba9a4a2a42c58fb7513c47811bc87f2a876c2794\" returns successfully" Apr 13 19:25:11.926308 kubelet[3617]: I0413 19:25:11.926259 3617 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:11.933526 containerd[2133]: time="2026-04-13T19:25:11.932730070Z" level=info msg="StopPodSandbox for \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\"" Apr 13 19:25:11.933526 containerd[2133]: time="2026-04-13T19:25:11.933062854Z" level=info msg="Ensure that sandbox 514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69 in task-service has been cleanup successfully" Apr 13 19:25:11.938798 kubelet[3617]: I0413 19:25:11.938747 3617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:11.943715 containerd[2133]: time="2026-04-13T19:25:11.943067278Z" level=info msg="StopPodSandbox for \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\"" Apr 13 19:25:11.944875 containerd[2133]: time="2026-04-13T19:25:11.944805886Z" level=info msg="Ensure that sandbox 1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a in task-service has been cleanup successfully" Apr 13 19:25:11.953380 kubelet[3617]: I0413 19:25:11.951710 3617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:11.961726 containerd[2133]: time="2026-04-13T19:25:11.961616614Z" level=info msg="StopPodSandbox for \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\"" Apr 13 19:25:11.962139 containerd[2133]: time="2026-04-13T19:25:11.962087506Z" level=info msg="Ensure that sandbox 36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db in task-service has been cleanup successfully" Apr 13 19:25:11.974209 kubelet[3617]: I0413 19:25:11.972911 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zpmxb" podStartSLOduration=5.583536352 podStartE2EDuration="21.972861094s" 
podCreationTimestamp="2026-04-13 19:24:50 +0000 UTC" firstStartedPulling="2026-04-13 19:24:51.719066677 +0000 UTC m=+29.365137651" lastFinishedPulling="2026-04-13 19:25:08.108391419 +0000 UTC m=+45.754462393" observedRunningTime="2026-04-13 19:25:11.966783622 +0000 UTC m=+49.612854620" watchObservedRunningTime="2026-04-13 19:25:11.972861094 +0000 UTC m=+49.618932080" Apr 13 19:25:11.978014 kubelet[3617]: I0413 19:25:11.977979 3617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:11.983052 containerd[2133]: time="2026-04-13T19:25:11.982126174Z" level=info msg="StopPodSandbox for \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\"" Apr 13 19:25:11.983052 containerd[2133]: time="2026-04-13T19:25:11.982467370Z" level=info msg="Ensure that sandbox aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671 in task-service has been cleanup successfully" Apr 13 19:25:12.004243 kubelet[3617]: I0413 19:25:12.004191 3617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:12.011666 containerd[2133]: time="2026-04-13T19:25:12.011141610Z" level=info msg="StopPodSandbox for \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\"" Apr 13 19:25:12.011666 containerd[2133]: time="2026-04-13T19:25:12.011446254Z" level=info msg="Ensure that sandbox 8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61 in task-service has been cleanup successfully" Apr 13 19:25:12.027066 containerd[2133]: time="2026-04-13T19:25:12.027014214Z" level=info msg="StopPodSandbox for \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\"" Apr 13 19:25:12.027814 kubelet[3617]: I0413 19:25:12.027730 3617 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:12.029130 containerd[2133]: time="2026-04-13T19:25:12.028889886Z" level=info msg="Ensure that sandbox fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05 in task-service has been cleanup successfully" Apr 13 19:25:12.054792 kubelet[3617]: I0413 19:25:12.047898 3617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:12.060131 containerd[2133]: time="2026-04-13T19:25:12.060037350Z" level=info msg="StopPodSandbox for \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\"" Apr 13 19:25:12.063830 containerd[2133]: time="2026-04-13T19:25:12.063591186Z" level=info msg="Ensure that sandbox d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79 in task-service has been cleanup successfully" Apr 13 19:25:12.077006 kubelet[3617]: I0413 19:25:12.076948 3617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:12.090804 containerd[2133]: time="2026-04-13T19:25:12.089613678Z" level=info msg="StopPodSandbox for \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\"" Apr 13 19:25:12.090804 containerd[2133]: time="2026-04-13T19:25:12.089971158Z" level=info msg="Ensure that sandbox 9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488 in task-service has been cleanup successfully" Apr 13 19:25:12.139831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61-shm.mount: Deactivated successfully. Apr 13 19:25:12.141817 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79-shm.mount: Deactivated successfully. 
Apr 13 19:25:12.143065 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69-shm.mount: Deactivated successfully. Apr 13 19:25:12.146948 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05-shm.mount: Deactivated successfully. Apr 13 19:25:12.147226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db-shm.mount: Deactivated successfully. Apr 13 19:25:12.147469 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671-shm.mount: Deactivated successfully. Apr 13 19:25:12.148863 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488-shm.mount: Deactivated successfully. Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.661 [INFO][4786] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.664 [INFO][4786] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" iface="eth0" netns="/var/run/netns/cni-63145a60-7d3b-cbb6-9306-64b1190f545a" Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.664 [INFO][4786] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" iface="eth0" netns="/var/run/netns/cni-63145a60-7d3b-cbb6-9306-64b1190f545a" Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.684 [INFO][4786] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" iface="eth0" netns="/var/run/netns/cni-63145a60-7d3b-cbb6-9306-64b1190f545a" Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.684 [INFO][4786] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.684 [INFO][4786] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.841 [INFO][4889] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" HandleID="k8s-pod-network.aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.842 [INFO][4889] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.842 [INFO][4889] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.856 [WARNING][4889] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" HandleID="k8s-pod-network.aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.857 [INFO][4889] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" HandleID="k8s-pod-network.aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.860 [INFO][4889] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:12.904635 containerd[2133]: 2026-04-13 19:25:12.877 [INFO][4786] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:12.912507 containerd[2133]: time="2026-04-13T19:25:12.911839150Z" level=info msg="TearDown network for sandbox \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\" successfully" Apr 13 19:25:12.912507 containerd[2133]: time="2026-04-13T19:25:12.911889742Z" level=info msg="StopPodSandbox for \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\" returns successfully" Apr 13 19:25:12.915212 systemd[1]: run-netns-cni\x2d63145a60\x2d7d3b\x2dcbb6\x2d9306\x2d64b1190f545a.mount: Deactivated successfully. 
Apr 13 19:25:12.917928 containerd[2133]: time="2026-04-13T19:25:12.917477867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7949b6b746-86tp4,Uid:49aa872e-1beb-42e6-8cd8-546921069c20,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.669 [INFO][4804] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.670 [INFO][4804] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" iface="eth0" netns="/var/run/netns/cni-b8406c7a-4cfb-f352-1762-361fd3eb7c91" Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.670 [INFO][4804] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" iface="eth0" netns="/var/run/netns/cni-b8406c7a-4cfb-f352-1762-361fd3eb7c91" Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.670 [INFO][4804] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" iface="eth0" netns="/var/run/netns/cni-b8406c7a-4cfb-f352-1762-361fd3eb7c91" Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.670 [INFO][4804] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.671 [INFO][4804] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.841 [INFO][4884] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" HandleID="k8s-pod-network.fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.843 [INFO][4884] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.864 [INFO][4884] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.881 [WARNING][4884] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" HandleID="k8s-pod-network.fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.882 [INFO][4884] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" HandleID="k8s-pod-network.fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.884 [INFO][4884] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:12.921968 containerd[2133]: 2026-04-13 19:25:12.907 [INFO][4804] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:12.924891 containerd[2133]: time="2026-04-13T19:25:12.924591155Z" level=info msg="TearDown network for sandbox \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\" successfully" Apr 13 19:25:12.924891 containerd[2133]: time="2026-04-13T19:25:12.924650615Z" level=info msg="StopPodSandbox for \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\" returns successfully" Apr 13 19:25:12.928397 containerd[2133]: time="2026-04-13T19:25:12.928155575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9697dc4b-6kgd9,Uid:69ebf535-2663-4af7-9297-fd6777511804,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:12.935358 systemd[1]: run-netns-cni\x2db8406c7a\x2d4cfb\x2df352\x2d1762\x2d361fd3eb7c91.mount: Deactivated successfully. 
Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.557 [INFO][4798] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.562 [INFO][4798] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" iface="eth0" netns="/var/run/netns/cni-606188b0-0ca5-2bac-b667-87d2ef189d88" Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.563 [INFO][4798] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" iface="eth0" netns="/var/run/netns/cni-606188b0-0ca5-2bac-b667-87d2ef189d88" Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.580 [INFO][4798] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" iface="eth0" netns="/var/run/netns/cni-606188b0-0ca5-2bac-b667-87d2ef189d88" Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.582 [INFO][4798] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.583 [INFO][4798] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.883 [INFO][4853] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" HandleID="k8s-pod-network.8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.886 [INFO][4853] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.886 [INFO][4853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.936 [WARNING][4853] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" HandleID="k8s-pod-network.8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.936 [INFO][4853] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" HandleID="k8s-pod-network.8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.940 [INFO][4853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:12.967919 containerd[2133]: 2026-04-13 19:25:12.951 [INFO][4798] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:12.969090 containerd[2133]: time="2026-04-13T19:25:12.968087543Z" level=info msg="TearDown network for sandbox \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\" successfully" Apr 13 19:25:12.969090 containerd[2133]: time="2026-04-13T19:25:12.968128871Z" level=info msg="StopPodSandbox for \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\" returns successfully" Apr 13 19:25:12.975829 containerd[2133]: time="2026-04-13T19:25:12.973821443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pjcq,Uid:d14c8f4d-16d8-4d7e-83af-5e5a012516fe,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:12.982147 systemd[1]: run-netns-cni\x2d606188b0\x2d0ca5\x2d2bac\x2db667\x2d87d2ef189d88.mount: Deactivated successfully. Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.575 [INFO][4756] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.585 [INFO][4756] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" iface="eth0" netns="/var/run/netns/cni-b635acea-6627-0d0a-5cec-168491324b66" Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.586 [INFO][4756] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" iface="eth0" netns="/var/run/netns/cni-b635acea-6627-0d0a-5cec-168491324b66" Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.586 [INFO][4756] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" iface="eth0" netns="/var/run/netns/cni-b635acea-6627-0d0a-5cec-168491324b66" Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.586 [INFO][4756] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.586 [INFO][4756] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.908 [INFO][4859] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" HandleID="k8s-pod-network.36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.908 [INFO][4859] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.941 [INFO][4859] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.972 [WARNING][4859] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" HandleID="k8s-pod-network.36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.972 [INFO][4859] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" HandleID="k8s-pod-network.36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.979 [INFO][4859] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:13.001806 containerd[2133]: 2026-04-13 19:25:12.992 [INFO][4756] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:13.006085 containerd[2133]: time="2026-04-13T19:25:13.005887819Z" level=info msg="TearDown network for sandbox \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\" successfully" Apr 13 19:25:13.006085 containerd[2133]: time="2026-04-13T19:25:13.005944387Z" level=info msg="StopPodSandbox for \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\" returns successfully" Apr 13 19:25:13.014728 containerd[2133]: time="2026-04-13T19:25:13.014644879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rzcwq,Uid:c8a1f799-8088-43db-b33b-f83deb990843,Namespace:kube-system,Attempt:1,}" Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:12.584 [INFO][4808] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:12.587 [INFO][4808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" iface="eth0" netns="/var/run/netns/cni-f419cbc6-f69c-aa63-97fc-9c0a2be140ad" Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:12.588 [INFO][4808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" iface="eth0" netns="/var/run/netns/cni-f419cbc6-f69c-aa63-97fc-9c0a2be140ad" Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:12.591 [INFO][4808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" iface="eth0" netns="/var/run/netns/cni-f419cbc6-f69c-aa63-97fc-9c0a2be140ad" Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:12.591 [INFO][4808] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:12.591 [INFO][4808] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:12.933 [INFO][4860] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" HandleID="k8s-pod-network.d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:12.933 [INFO][4860] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:12.979 [INFO][4860] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:13.010 [WARNING][4860] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" HandleID="k8s-pod-network.d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:13.010 [INFO][4860] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" HandleID="k8s-pod-network.d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:13.014 [INFO][4860] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:13.029960 containerd[2133]: 2026-04-13 19:25:13.023 [INFO][4808] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:13.035949 containerd[2133]: time="2026-04-13T19:25:13.035859811Z" level=info msg="TearDown network for sandbox \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\" successfully" Apr 13 19:25:13.039381 containerd[2133]: time="2026-04-13T19:25:13.038999851Z" level=info msg="StopPodSandbox for \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\" returns successfully" Apr 13 19:25:13.043620 containerd[2133]: time="2026-04-13T19:25:13.043197751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-kpdfg,Uid:76e86138-05a0-4187-9e4b-c73d50410649,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:12.615 [INFO][4807] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:12.616 [INFO][4807] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" iface="eth0" netns="/var/run/netns/cni-7e5646ab-e5b3-de4d-0bdc-4caa624ffcbc" Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:12.616 [INFO][4807] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" iface="eth0" netns="/var/run/netns/cni-7e5646ab-e5b3-de4d-0bdc-4caa624ffcbc" Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:12.617 [INFO][4807] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" iface="eth0" netns="/var/run/netns/cni-7e5646ab-e5b3-de4d-0bdc-4caa624ffcbc" Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:12.617 [INFO][4807] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:12.617 [INFO][4807] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:12.970 [INFO][4874] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" HandleID="k8s-pod-network.9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Workload="ip--172--31--26--195-k8s-whisker--65f5db84d5--mpt4b-eth0" Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:12.970 [INFO][4874] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:13.016 [INFO][4874] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:13.040 [WARNING][4874] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" HandleID="k8s-pod-network.9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Workload="ip--172--31--26--195-k8s-whisker--65f5db84d5--mpt4b-eth0" Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:13.040 [INFO][4874] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" HandleID="k8s-pod-network.9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Workload="ip--172--31--26--195-k8s-whisker--65f5db84d5--mpt4b-eth0" Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:13.045 [INFO][4874] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:13.067979 containerd[2133]: 2026-04-13 19:25:13.057 [INFO][4807] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:13.071069 containerd[2133]: time="2026-04-13T19:25:13.070871959Z" level=info msg="TearDown network for sandbox \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\" successfully" Apr 13 19:25:13.071069 containerd[2133]: time="2026-04-13T19:25:13.070937815Z" level=info msg="StopPodSandbox for \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\" returns successfully" Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:12.557 [INFO][4741] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:12.558 [INFO][4741] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" iface="eth0" netns="/var/run/netns/cni-c91dc24d-67e3-f9f6-8698-3c8ace1ed07d" Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:12.568 [INFO][4741] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" iface="eth0" netns="/var/run/netns/cni-c91dc24d-67e3-f9f6-8698-3c8ace1ed07d" Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:12.586 [INFO][4741] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" iface="eth0" netns="/var/run/netns/cni-c91dc24d-67e3-f9f6-8698-3c8ace1ed07d" Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:12.586 [INFO][4741] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:12.587 [INFO][4741] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:12.978 [INFO][4857] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" HandleID="k8s-pod-network.514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:12.990 [INFO][4857] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:13.044 [INFO][4857] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:13.072 [WARNING][4857] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" HandleID="k8s-pod-network.514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:13.074 [INFO][4857] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" HandleID="k8s-pod-network.514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:13.082 [INFO][4857] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:13.120964 containerd[2133]: 2026-04-13 19:25:13.105 [INFO][4741] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:13.124550 containerd[2133]: time="2026-04-13T19:25:13.124214492Z" level=info msg="TearDown network for sandbox \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\" successfully" Apr 13 19:25:13.124550 containerd[2133]: time="2026-04-13T19:25:13.124288748Z" level=info msg="StopPodSandbox for \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\" returns successfully" Apr 13 19:25:13.128340 containerd[2133]: time="2026-04-13T19:25:13.128003396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9697dc4b-j7tsc,Uid:702af813-e895-432b-a737-e7ecba2b6103,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:13.154480 systemd[1]: run-netns-cni\x2df419cbc6\x2df69c\x2daa63\x2d97fc\x2d9c0a2be140ad.mount: Deactivated successfully. 
Apr 13 19:25:13.155325 systemd[1]: run-netns-cni\x2dc91dc24d\x2d67e3\x2df9f6\x2d8698\x2d3c8ace1ed07d.mount: Deactivated successfully. Apr 13 19:25:13.155564 systemd[1]: run-netns-cni\x2db635acea\x2d6627\x2d0d0a\x2d5cec\x2d168491324b66.mount: Deactivated successfully. Apr 13 19:25:13.156206 systemd[1]: run-netns-cni\x2d7e5646ab\x2de5b3\x2dde4d\x2d0bdc\x2d4caa624ffcbc.mount: Deactivated successfully. Apr 13 19:25:13.187089 kubelet[3617]: I0413 19:25:13.186167 3617 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-nginx-config\") pod \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\" (UID: \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\") " Apr 13 19:25:13.187089 kubelet[3617]: I0413 19:25:13.186253 3617 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-whisker-backend-key-pair\") pod \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\" (UID: \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\") " Apr 13 19:25:13.187089 kubelet[3617]: I0413 19:25:13.186300 3617 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmv45\" (UniqueName: \"kubernetes.io/projected/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-kube-api-access-xmv45\") pod \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\" (UID: \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\") " Apr 13 19:25:13.187089 kubelet[3617]: I0413 19:25:13.186379 3617 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-whisker-ca-bundle\") pod \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\" (UID: \"3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8\") " Apr 13 19:25:13.190960 kubelet[3617]: I0413 19:25:13.187201 3617 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8" (UID: "3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:25:13.190960 kubelet[3617]: I0413 19:25:13.188271 3617 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8" (UID: "3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:25:13.212287 systemd[1]: var-lib-kubelet-pods-3b2c46c5\x2d8a35\x2d4fe2\x2d9019\x2dc1fa1dae11c8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxmv45.mount: Deactivated successfully. Apr 13 19:25:13.212934 systemd[1]: var-lib-kubelet-pods-3b2c46c5\x2d8a35\x2d4fe2\x2d9019\x2dc1fa1dae11c8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:12.563 [INFO][4746] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:12.564 [INFO][4746] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" iface="eth0" netns="/var/run/netns/cni-c4bf830b-f4ee-8c8c-e9b7-f93db5062943" Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:12.566 [INFO][4746] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" iface="eth0" netns="/var/run/netns/cni-c4bf830b-f4ee-8c8c-e9b7-f93db5062943" Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:12.575 [INFO][4746] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" iface="eth0" netns="/var/run/netns/cni-c4bf830b-f4ee-8c8c-e9b7-f93db5062943" Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:12.576 [INFO][4746] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:12.583 [INFO][4746] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:12.981 [INFO][4852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" HandleID="k8s-pod-network.1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:12.996 [INFO][4852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:13.082 [INFO][4852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:13.118 [WARNING][4852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" HandleID="k8s-pod-network.1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:13.119 [INFO][4852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" HandleID="k8s-pod-network.1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:13.121 [INFO][4852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:13.220586 containerd[2133]: 2026-04-13 19:25:13.146 [INFO][4746] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:13.223176 kubelet[3617]: I0413 19:25:13.222901 3617 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-kube-api-access-xmv45" (OuterVolumeSpecName: "kube-api-access-xmv45") pod "3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8" (UID: "3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8"). InnerVolumeSpecName "kube-api-access-xmv45". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:25:13.232999 kubelet[3617]: I0413 19:25:13.232925 3617 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8" (UID: "3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:25:13.233533 systemd[1]: run-netns-cni\x2dc4bf830b\x2df4ee\x2d8c8c\x2de9b7\x2df93db5062943.mount: Deactivated successfully. Apr 13 19:25:13.243104 containerd[2133]: time="2026-04-13T19:25:13.243032540Z" level=info msg="TearDown network for sandbox \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\" successfully" Apr 13 19:25:13.243308 containerd[2133]: time="2026-04-13T19:25:13.243264152Z" level=info msg="StopPodSandbox for \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\" returns successfully" Apr 13 19:25:13.254269 containerd[2133]: time="2026-04-13T19:25:13.254091668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpbcs,Uid:a02eceba-5363-4cf1-87ce-7f671e2cd0cc,Namespace:kube-system,Attempt:1,}" Apr 13 19:25:13.293637 kubelet[3617]: I0413 19:25:13.293577 3617 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-whisker-ca-bundle\") on node \"ip-172-31-26-195\" DevicePath \"\"" Apr 13 19:25:13.293637 kubelet[3617]: I0413 19:25:13.293634 3617 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-nginx-config\") on node \"ip-172-31-26-195\" DevicePath \"\"" Apr 13 19:25:13.294280 kubelet[3617]: I0413 19:25:13.293660 3617 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-whisker-backend-key-pair\") on node \"ip-172-31-26-195\" DevicePath \"\"" Apr 13 19:25:13.294931 kubelet[3617]: I0413 19:25:13.293712 3617 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xmv45\" (UniqueName: \"kubernetes.io/projected/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8-kube-api-access-xmv45\") on node \"ip-172-31-26-195\" DevicePath \"\"" Apr 13 
19:25:14.410070 kubelet[3617]: I0413 19:25:14.409413 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5e8dac51-94d4-4e39-aa6a-5d64352e0aa0-nginx-config\") pod \"whisker-79f47f5d66-bmfqd\" (UID: \"5e8dac51-94d4-4e39-aa6a-5d64352e0aa0\") " pod="calico-system/whisker-79f47f5d66-bmfqd" Apr 13 19:25:14.412894 kubelet[3617]: I0413 19:25:14.412015 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5e8dac51-94d4-4e39-aa6a-5d64352e0aa0-whisker-backend-key-pair\") pod \"whisker-79f47f5d66-bmfqd\" (UID: \"5e8dac51-94d4-4e39-aa6a-5d64352e0aa0\") " pod="calico-system/whisker-79f47f5d66-bmfqd" Apr 13 19:25:14.412894 kubelet[3617]: I0413 19:25:14.412115 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e8dac51-94d4-4e39-aa6a-5d64352e0aa0-whisker-ca-bundle\") pod \"whisker-79f47f5d66-bmfqd\" (UID: \"5e8dac51-94d4-4e39-aa6a-5d64352e0aa0\") " pod="calico-system/whisker-79f47f5d66-bmfqd" Apr 13 19:25:14.412894 kubelet[3617]: I0413 19:25:14.412231 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddxhz\" (UniqueName: \"kubernetes.io/projected/5e8dac51-94d4-4e39-aa6a-5d64352e0aa0-kube-api-access-ddxhz\") pod \"whisker-79f47f5d66-bmfqd\" (UID: \"5e8dac51-94d4-4e39-aa6a-5d64352e0aa0\") " pod="calico-system/whisker-79f47f5d66-bmfqd" Apr 13 19:25:14.638709 containerd[2133]: time="2026-04-13T19:25:14.638018567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79f47f5d66-bmfqd,Uid:5e8dac51-94d4-4e39-aa6a-5d64352e0aa0,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:14.663017 kubelet[3617]: I0413 19:25:14.659459 3617 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8" path="/var/lib/kubelet/pods/3b2c46c5-8a35-4fe2-9019-c1fa1dae11c8/volumes" Apr 13 19:25:14.729981 (udev-worker)[5143]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:25:14.733511 systemd-networkd[1694]: cali53dfae7814d: Link UP Apr 13 19:25:14.743147 systemd-networkd[1694]: cali53dfae7814d: Gained carrier Apr 13 19:25:14.856436 systemd-networkd[1694]: cali729c491e1d3: Link UP Apr 13 19:25:14.858363 systemd-networkd[1694]: cali729c491e1d3: Gained carrier Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:13.469 [ERROR][4941] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:13.534 [INFO][4941] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0 coredns-674b8bbfcf- kube-system c8a1f799-8088-43db-b33b-f83deb990843 927 0 2026-04-13 19:24:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-195 coredns-674b8bbfcf-rzcwq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali53dfae7814d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" Namespace="kube-system" Pod="coredns-674b8bbfcf-rzcwq" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:13.534 [INFO][4941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" Namespace="kube-system" Pod="coredns-674b8bbfcf-rzcwq" 
WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.217 [INFO][5039] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" HandleID="k8s-pod-network.7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.328 [INFO][5039] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" HandleID="k8s-pod-network.7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039df50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-195", "pod":"coredns-674b8bbfcf-rzcwq", "timestamp":"2026-04-13 19:25:14.217654581 +0000 UTC"}, Hostname:"ip-172-31-26-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003a0f20)} Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.328 [INFO][5039] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.345 [INFO][5039] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.345 [INFO][5039] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-195' Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.372 [INFO][5039] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" host="ip-172-31-26-195" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.429 [INFO][5039] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-195" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.461 [INFO][5039] ipam/ipam.go 526: Trying affinity for 192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.468 [INFO][5039] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.480 [INFO][5039] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.482 [INFO][5039] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" host="ip-172-31-26-195" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.486 [INFO][5039] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354 Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.500 [INFO][5039] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" host="ip-172-31-26-195" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.536 [INFO][5039] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.129/26] block=192.168.97.128/26 
handle="k8s-pod-network.7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" host="ip-172-31-26-195" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.539 [INFO][5039] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.129/26] handle="k8s-pod-network.7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" host="ip-172-31-26-195" Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.542 [INFO][5039] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:14.925841 containerd[2133]: 2026-04-13 19:25:14.549 [INFO][5039] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.129/26] IPv6=[] ContainerID="7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" HandleID="k8s-pod-network.7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:14.927837 containerd[2133]: 2026-04-13 19:25:14.632 [INFO][4941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" Namespace="kube-system" Pod="coredns-674b8bbfcf-rzcwq" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8a1f799-8088-43db-b33b-f83deb990843", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"", Pod:"coredns-674b8bbfcf-rzcwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53dfae7814d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:14.927837 containerd[2133]: 2026-04-13 19:25:14.632 [INFO][4941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.129/32] ContainerID="7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" Namespace="kube-system" Pod="coredns-674b8bbfcf-rzcwq" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:14.927837 containerd[2133]: 2026-04-13 19:25:14.632 [INFO][4941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53dfae7814d ContainerID="7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" Namespace="kube-system" Pod="coredns-674b8bbfcf-rzcwq" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:14.927837 containerd[2133]: 2026-04-13 19:25:14.788 [INFO][4941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-rzcwq" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:14.927837 containerd[2133]: 2026-04-13 19:25:14.821 [INFO][4941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" Namespace="kube-system" Pod="coredns-674b8bbfcf-rzcwq" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8a1f799-8088-43db-b33b-f83deb990843", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354", Pod:"coredns-674b8bbfcf-rzcwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53dfae7814d", MAC:"72:e4:58:26:d1:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:14.927837 containerd[2133]: 2026-04-13 19:25:14.881 [INFO][4941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354" Namespace="kube-system" Pod="coredns-674b8bbfcf-rzcwq" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:13.324 [ERROR][4908] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:13.394 [INFO][4908] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0 calico-kube-controllers-7949b6b746- calico-system 49aa872e-1beb-42e6-8cd8-546921069c20 930 0 2026-04-13 19:24:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7949b6b746 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-195 calico-kube-controllers-7949b6b746-86tp4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali729c491e1d3 [] [] }} ContainerID="62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" Namespace="calico-system" Pod="calico-kube-controllers-7949b6b746-86tp4" 
WorkloadEndpoint="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:13.394 [INFO][4908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" Namespace="calico-system" Pod="calico-kube-controllers-7949b6b746-86tp4" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.347 [INFO][4987] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" HandleID="k8s-pod-network.62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.422 [INFO][4987] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" HandleID="k8s-pod-network.62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e73d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-195", "pod":"calico-kube-controllers-7949b6b746-86tp4", "timestamp":"2026-04-13 19:25:14.34791307 +0000 UTC"}, Hostname:"ip-172-31-26-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001ec580)} Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.422 [INFO][4987] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.569 [INFO][4987] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.569 [INFO][4987] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-195' Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.577 [INFO][4987] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" host="ip-172-31-26-195" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.605 [INFO][4987] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-195" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.625 [INFO][4987] ipam/ipam.go 526: Trying affinity for 192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.630 [INFO][4987] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.637 [INFO][4987] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.642 [INFO][4987] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" host="ip-172-31-26-195" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.646 [INFO][4987] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054 Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.665 [INFO][4987] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" host="ip-172-31-26-195" Apr 13 19:25:14.992530 
containerd[2133]: 2026-04-13 19:25:14.703 [INFO][4987] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.130/26] block=192.168.97.128/26 handle="k8s-pod-network.62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" host="ip-172-31-26-195" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.703 [INFO][4987] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.130/26] handle="k8s-pod-network.62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" host="ip-172-31-26-195" Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.703 [INFO][4987] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:14.992530 containerd[2133]: 2026-04-13 19:25:14.703 [INFO][4987] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.130/26] IPv6=[] ContainerID="62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" HandleID="k8s-pod-network.62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:14.998044 containerd[2133]: 2026-04-13 19:25:14.838 [INFO][4908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" Namespace="calico-system" Pod="calico-kube-controllers-7949b6b746-86tp4" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0", GenerateName:"calico-kube-controllers-7949b6b746-", Namespace:"calico-system", SelfLink:"", UID:"49aa872e-1beb-42e6-8cd8-546921069c20", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7949b6b746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"", Pod:"calico-kube-controllers-7949b6b746-86tp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali729c491e1d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:14.998044 containerd[2133]: 2026-04-13 19:25:14.839 [INFO][4908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.130/32] ContainerID="62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" Namespace="calico-system" Pod="calico-kube-controllers-7949b6b746-86tp4" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:14.998044 containerd[2133]: 2026-04-13 19:25:14.839 [INFO][4908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali729c491e1d3 ContainerID="62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" Namespace="calico-system" Pod="calico-kube-controllers-7949b6b746-86tp4" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:14.998044 containerd[2133]: 2026-04-13 19:25:14.884 [INFO][4908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" Namespace="calico-system" Pod="calico-kube-controllers-7949b6b746-86tp4" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:14.998044 containerd[2133]: 2026-04-13 19:25:14.886 [INFO][4908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" Namespace="calico-system" Pod="calico-kube-controllers-7949b6b746-86tp4" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0", GenerateName:"calico-kube-controllers-7949b6b746-", Namespace:"calico-system", SelfLink:"", UID:"49aa872e-1beb-42e6-8cd8-546921069c20", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7949b6b746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054", Pod:"calico-kube-controllers-7949b6b746-86tp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali729c491e1d3", MAC:"42:61:32:07:d1:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:14.998044 containerd[2133]: 2026-04-13 19:25:14.958 [INFO][4908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054" Namespace="calico-system" Pod="calico-kube-controllers-7949b6b746-86tp4" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:15.084452 systemd-networkd[1694]: cali666bb30349a: Link UP Apr 13 19:25:15.119450 systemd-networkd[1694]: cali666bb30349a: Gained carrier Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:13.371 [ERROR][4909] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:13.469 [INFO][4909] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0 calico-apiserver-5d9697dc4b- calico-system 69ebf535-2663-4af7-9297-fd6777511804 931 0 2026-04-13 19:24:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d9697dc4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-195 calico-apiserver-5d9697dc4b-6kgd9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali666bb30349a [] [] }} ContainerID="430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" Namespace="calico-system" 
Pod="calico-apiserver-5d9697dc4b-6kgd9" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:13.469 [INFO][4909] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-6kgd9" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.323 [INFO][5009] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" HandleID="k8s-pod-network.430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.437 [INFO][5009] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" HandleID="k8s-pod-network.430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000355d20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-195", "pod":"calico-apiserver-5d9697dc4b-6kgd9", "timestamp":"2026-04-13 19:25:14.307643121 +0000 UTC"}, Hostname:"ip-172-31-26-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400051b4a0)} Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.437 [INFO][5009] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.739 [INFO][5009] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.748 [INFO][5009] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-195' Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.781 [INFO][5009] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" host="ip-172-31-26-195" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.870 [INFO][5009] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-195" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.905 [INFO][5009] ipam/ipam.go 526: Trying affinity for 192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.911 [INFO][5009] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.936 [INFO][5009] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.938 [INFO][5009] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" host="ip-172-31-26-195" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.944 [INFO][5009] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:14.984 [INFO][5009] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" host="ip-172-31-26-195" Apr 13 19:25:15.235256 
containerd[2133]: 2026-04-13 19:25:15.003 [INFO][5009] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.131/26] block=192.168.97.128/26 handle="k8s-pod-network.430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" host="ip-172-31-26-195" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:15.005 [INFO][5009] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.131/26] handle="k8s-pod-network.430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" host="ip-172-31-26-195" Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:15.006 [INFO][5009] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:15.235256 containerd[2133]: 2026-04-13 19:25:15.008 [INFO][5009] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.131/26] IPv6=[] ContainerID="430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" HandleID="k8s-pod-network.430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:15.245324 containerd[2133]: 2026-04-13 19:25:15.042 [INFO][4909] cni-plugin/k8s.go 418: Populated endpoint ContainerID="430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-6kgd9" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0", GenerateName:"calico-apiserver-5d9697dc4b-", Namespace:"calico-system", SelfLink:"", UID:"69ebf535-2663-4af7-9297-fd6777511804", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9697dc4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"", Pod:"calico-apiserver-5d9697dc4b-6kgd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali666bb30349a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:15.245324 containerd[2133]: 2026-04-13 19:25:15.042 [INFO][4909] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.131/32] ContainerID="430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-6kgd9" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:15.245324 containerd[2133]: 2026-04-13 19:25:15.042 [INFO][4909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali666bb30349a ContainerID="430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-6kgd9" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:15.245324 containerd[2133]: 2026-04-13 19:25:15.122 [INFO][4909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-6kgd9" 
WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:15.245324 containerd[2133]: 2026-04-13 19:25:15.151 [INFO][4909] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-6kgd9" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0", GenerateName:"calico-apiserver-5d9697dc4b-", Namespace:"calico-system", SelfLink:"", UID:"69ebf535-2663-4af7-9297-fd6777511804", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9697dc4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f", Pod:"calico-apiserver-5d9697dc4b-6kgd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali666bb30349a", MAC:"06:7d:96:15:72:2a", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:15.245324 containerd[2133]: 2026-04-13 19:25:15.200 [INFO][4909] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-6kgd9" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:15.267675 systemd-networkd[1694]: cali4279c56c6d4: Link UP Apr 13 19:25:15.268898 systemd-networkd[1694]: cali4279c56c6d4: Gained carrier Apr 13 19:25:15.279941 containerd[2133]: time="2026-04-13T19:25:15.274541182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:15.287776 containerd[2133]: time="2026-04-13T19:25:15.279810130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:15.287776 containerd[2133]: time="2026-04-13T19:25:15.286088494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.287776 containerd[2133]: time="2026-04-13T19:25:15.286310542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:13.616 [ERROR][4945] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:13.795 [INFO][4945] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0 goldmane-5b85766d88- calico-system 76e86138-05a0-4187-9e4b-c73d50410649 928 0 2026-04-13 19:24:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-195 goldmane-5b85766d88-kpdfg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4279c56c6d4 [] [] }} ContainerID="67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" Namespace="calico-system" Pod="goldmane-5b85766d88-kpdfg" WorkloadEndpoint="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:13.797 [INFO][4945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" Namespace="calico-system" Pod="goldmane-5b85766d88-kpdfg" WorkloadEndpoint="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:14.385 [INFO][5105] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" HandleID="k8s-pod-network.67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:15.344798 
containerd[2133]: 2026-04-13 19:25:14.472 [INFO][5105] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" HandleID="k8s-pod-network.67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039cf20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-195", "pod":"goldmane-5b85766d88-kpdfg", "timestamp":"2026-04-13 19:25:14.385603354 +0000 UTC"}, Hostname:"ip-172-31-26-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400013f340)} Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:14.472 [INFO][5105] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.011 [INFO][5105] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.011 [INFO][5105] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-195' Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.019 [INFO][5105] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" host="ip-172-31-26-195" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.046 [INFO][5105] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-195" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.111 [INFO][5105] ipam/ipam.go 526: Trying affinity for 192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.125 [INFO][5105] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.137 [INFO][5105] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.137 [INFO][5105] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" host="ip-172-31-26-195" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.147 [INFO][5105] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734 Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.165 [INFO][5105] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" host="ip-172-31-26-195" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.205 [INFO][5105] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.132/26] block=192.168.97.128/26 
handle="k8s-pod-network.67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" host="ip-172-31-26-195" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.207 [INFO][5105] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.132/26] handle="k8s-pod-network.67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" host="ip-172-31-26-195" Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.210 [INFO][5105] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:15.344798 containerd[2133]: 2026-04-13 19:25:15.211 [INFO][5105] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.132/26] IPv6=[] ContainerID="67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" HandleID="k8s-pod-network.67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:15.347624 containerd[2133]: 2026-04-13 19:25:15.261 [INFO][4945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" Namespace="calico-system" Pod="goldmane-5b85766d88-kpdfg" WorkloadEndpoint="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"76e86138-05a0-4187-9e4b-c73d50410649", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"", Pod:"goldmane-5b85766d88-kpdfg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4279c56c6d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:15.347624 containerd[2133]: 2026-04-13 19:25:15.263 [INFO][4945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.132/32] ContainerID="67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" Namespace="calico-system" Pod="goldmane-5b85766d88-kpdfg" WorkloadEndpoint="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:15.347624 containerd[2133]: 2026-04-13 19:25:15.263 [INFO][4945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4279c56c6d4 ContainerID="67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" Namespace="calico-system" Pod="goldmane-5b85766d88-kpdfg" WorkloadEndpoint="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:15.347624 containerd[2133]: 2026-04-13 19:25:15.270 [INFO][4945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" Namespace="calico-system" Pod="goldmane-5b85766d88-kpdfg" WorkloadEndpoint="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:15.347624 containerd[2133]: 2026-04-13 19:25:15.271 [INFO][4945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" Namespace="calico-system" Pod="goldmane-5b85766d88-kpdfg" WorkloadEndpoint="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"76e86138-05a0-4187-9e4b-c73d50410649", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734", Pod:"goldmane-5b85766d88-kpdfg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4279c56c6d4", MAC:"6e:f5:91:0a:50:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:15.347624 containerd[2133]: 2026-04-13 19:25:15.309 [INFO][4945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734" Namespace="calico-system" Pod="goldmane-5b85766d88-kpdfg" 
WorkloadEndpoint="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:15.366554 containerd[2133]: time="2026-04-13T19:25:15.360993275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:15.366554 containerd[2133]: time="2026-04-13T19:25:15.361194959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:15.366554 containerd[2133]: time="2026-04-13T19:25:15.361267127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.366554 containerd[2133]: time="2026-04-13T19:25:15.361902347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.475541 containerd[2133]: time="2026-04-13T19:25:15.456279503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:15.475541 containerd[2133]: time="2026-04-13T19:25:15.456464915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:15.475541 containerd[2133]: time="2026-04-13T19:25:15.456503999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.475541 containerd[2133]: time="2026-04-13T19:25:15.456705227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.479949 kubelet[3617]: I0413 19:25:15.473537 3617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:25:15.561789 systemd-networkd[1694]: cali7f6e7df96df: Link UP Apr 13 19:25:15.568382 systemd-networkd[1694]: cali7f6e7df96df: Gained carrier Apr 13 19:25:15.585439 kubelet[3617]: I0413 19:25:15.581827 3617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:13.626 [ERROR][4928] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:13.769 [INFO][4928] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0 csi-node-driver- calico-system d14c8f4d-16d8-4d7e-83af-5e5a012516fe 925 0 2026-04-13 19:24:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-195 csi-node-driver-4pjcq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7f6e7df96df [] [] }} ContainerID="ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" Namespace="calico-system" Pod="csi-node-driver-4pjcq" WorkloadEndpoint="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:13.769 [INFO][4928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" Namespace="calico-system" 
Pod="csi-node-driver-4pjcq" WorkloadEndpoint="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:14.608 [INFO][5097] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" HandleID="k8s-pod-network.ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:14.728 [INFO][5097] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" HandleID="k8s-pod-network.ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e2120), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-195", "pod":"csi-node-driver-4pjcq", "timestamp":"2026-04-13 19:25:14.608316815 +0000 UTC"}, Hostname:"ip-172-31-26-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000e02c0)} Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:14.741 [INFO][5097] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.213 [INFO][5097] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.213 [INFO][5097] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-195' Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.231 [INFO][5097] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" host="ip-172-31-26-195" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.279 [INFO][5097] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-195" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.323 [INFO][5097] ipam/ipam.go 526: Trying affinity for 192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.340 [INFO][5097] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.349 [INFO][5097] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.351 [INFO][5097] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" host="ip-172-31-26-195" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.371 [INFO][5097] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37 Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.398 [INFO][5097] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" host="ip-172-31-26-195" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.433 [INFO][5097] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.133/26] block=192.168.97.128/26 
handle="k8s-pod-network.ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" host="ip-172-31-26-195" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.436 [INFO][5097] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.133/26] handle="k8s-pod-network.ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" host="ip-172-31-26-195" Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.436 [INFO][5097] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:15.778427 containerd[2133]: 2026-04-13 19:25:15.436 [INFO][5097] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.133/26] IPv6=[] ContainerID="ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" HandleID="k8s-pod-network.ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:15.788023 containerd[2133]: 2026-04-13 19:25:15.509 [INFO][4928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" Namespace="calico-system" Pod="csi-node-driver-4pjcq" WorkloadEndpoint="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d14c8f4d-16d8-4d7e-83af-5e5a012516fe", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"", Pod:"csi-node-driver-4pjcq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f6e7df96df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:15.788023 containerd[2133]: 2026-04-13 19:25:15.509 [INFO][4928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.133/32] ContainerID="ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" Namespace="calico-system" Pod="csi-node-driver-4pjcq" WorkloadEndpoint="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:15.788023 containerd[2133]: 2026-04-13 19:25:15.509 [INFO][4928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f6e7df96df ContainerID="ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" Namespace="calico-system" Pod="csi-node-driver-4pjcq" WorkloadEndpoint="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:15.788023 containerd[2133]: 2026-04-13 19:25:15.567 [INFO][4928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" Namespace="calico-system" Pod="csi-node-driver-4pjcq" WorkloadEndpoint="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:15.788023 containerd[2133]: 2026-04-13 19:25:15.574 [INFO][4928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" Namespace="calico-system" Pod="csi-node-driver-4pjcq" WorkloadEndpoint="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d14c8f4d-16d8-4d7e-83af-5e5a012516fe", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37", Pod:"csi-node-driver-4pjcq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f6e7df96df", MAC:"6e:ab:a1:ba:39:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:15.788023 containerd[2133]: 2026-04-13 19:25:15.727 [INFO][4928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37" Namespace="calico-system" Pod="csi-node-driver-4pjcq" WorkloadEndpoint="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:15.922438 containerd[2133]: time="2026-04-13T19:25:15.919527853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:15.922438 containerd[2133]: time="2026-04-13T19:25:15.919639789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:15.931749 containerd[2133]: time="2026-04-13T19:25:15.919676713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.945955 containerd[2133]: time="2026-04-13T19:25:15.937441394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.970323 systemd-networkd[1694]: cali04ef6bc344b: Link UP Apr 13 19:25:15.971908 systemd-networkd[1694]: cali04ef6bc344b: Gained carrier Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:13.626 [ERROR][4959] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:13.776 [INFO][4959] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0 calico-apiserver-5d9697dc4b- calico-system 702af813-e895-432b-a737-e7ecba2b6103 926 0 2026-04-13 19:24:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d9697dc4b projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-195 calico-apiserver-5d9697dc4b-j7tsc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali04ef6bc344b [] [] }} ContainerID="65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-j7tsc" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:13.780 [INFO][4959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-j7tsc" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:14.599 [INFO][5090] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" HandleID="k8s-pod-network.65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:14.814 [INFO][5090] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" HandleID="k8s-pod-network.65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ac3c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-195", "pod":"calico-apiserver-5d9697dc4b-j7tsc", "timestamp":"2026-04-13 19:25:14.599815847 +0000 UTC"}, Hostname:"ip-172-31-26-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000296000)} Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:14.839 [INFO][5090] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.443 [INFO][5090] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.443 [INFO][5090] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-195' Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.459 [INFO][5090] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" host="ip-172-31-26-195" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.502 [INFO][5090] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-195" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.608 [INFO][5090] ipam/ipam.go 526: Trying affinity for 192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.632 [INFO][5090] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.661 [INFO][5090] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.661 [INFO][5090] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" host="ip-172-31-26-195" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.699 [INFO][5090] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369 Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.753 [INFO][5090] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" host="ip-172-31-26-195" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.813 [INFO][5090] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.134/26] block=192.168.97.128/26 handle="k8s-pod-network.65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" host="ip-172-31-26-195" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.813 [INFO][5090] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.134/26] handle="k8s-pod-network.65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" host="ip-172-31-26-195" Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.813 [INFO][5090] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:25:16.076661 containerd[2133]: 2026-04-13 19:25:15.813 [INFO][5090] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.134/26] IPv6=[] ContainerID="65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" HandleID="k8s-pod-network.65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:16.077968 containerd[2133]: 2026-04-13 19:25:15.910 [INFO][4959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-j7tsc" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0", GenerateName:"calico-apiserver-5d9697dc4b-", Namespace:"calico-system", SelfLink:"", UID:"702af813-e895-432b-a737-e7ecba2b6103", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9697dc4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"", Pod:"calico-apiserver-5d9697dc4b-j7tsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali04ef6bc344b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:16.077968 containerd[2133]: 2026-04-13 19:25:15.929 [INFO][4959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.134/32] ContainerID="65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-j7tsc" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:16.077968 containerd[2133]: 2026-04-13 19:25:15.932 [INFO][4959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04ef6bc344b ContainerID="65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-j7tsc" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:16.077968 containerd[2133]: 2026-04-13 19:25:15.974 [INFO][4959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-j7tsc" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:16.077968 containerd[2133]: 2026-04-13 19:25:15.976 [INFO][4959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-j7tsc" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0", GenerateName:"calico-apiserver-5d9697dc4b-", Namespace:"calico-system", SelfLink:"", UID:"702af813-e895-432b-a737-e7ecba2b6103", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9697dc4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369", Pod:"calico-apiserver-5d9697dc4b-j7tsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali04ef6bc344b", MAC:"2a:b9:21:af:68:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:16.077968 containerd[2133]: 2026-04-13 19:25:16.036 [INFO][4959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369" Namespace="calico-system" Pod="calico-apiserver-5d9697dc4b-j7tsc" WorkloadEndpoint="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:16.116713 containerd[2133]: time="2026-04-13T19:25:16.105974434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:16.116713 containerd[2133]: time="2026-04-13T19:25:16.106097158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:16.116713 containerd[2133]: time="2026-04-13T19:25:16.106133902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:16.116713 containerd[2133]: time="2026-04-13T19:25:16.106325554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:16.134385 systemd-networkd[1694]: cali48fbb858980: Link UP Apr 13 19:25:16.138576 systemd-networkd[1694]: cali48fbb858980: Gained carrier Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:13.877 [ERROR][4972] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:14.219 [INFO][4972] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0 coredns-674b8bbfcf- kube-system a02eceba-5363-4cf1-87ce-7f671e2cd0cc 924 0 2026-04-13 19:24:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-195 coredns-674b8bbfcf-jpbcs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali48fbb858980 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpbcs" 
WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:14.241 [INFO][4972] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpbcs" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:14.854 [INFO][5123] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" HandleID="k8s-pod-network.b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:14.959 [INFO][5123] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" HandleID="k8s-pod-network.b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400010dc50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-195", "pod":"coredns-674b8bbfcf-jpbcs", "timestamp":"2026-04-13 19:25:14.85463466 +0000 UTC"}, Hostname:"ip-172-31-26-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000546f20)} Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:14.959 [INFO][5123] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:15.817 [INFO][5123] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:15.818 [INFO][5123] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-195' Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:15.838 [INFO][5123] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" host="ip-172-31-26-195" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:15.908 [INFO][5123] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-195" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:15.930 [INFO][5123] ipam/ipam.go 526: Trying affinity for 192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:15.961 [INFO][5123] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:16.033 [INFO][5123] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:16.033 [INFO][5123] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" host="ip-172-31-26-195" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:16.041 [INFO][5123] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:16.068 [INFO][5123] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" host="ip-172-31-26-195" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:16.088 [INFO][5123] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.135/26] block=192.168.97.128/26 
handle="k8s-pod-network.b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" host="ip-172-31-26-195" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:16.088 [INFO][5123] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.135/26] handle="k8s-pod-network.b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" host="ip-172-31-26-195" Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:16.088 [INFO][5123] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:16.215587 containerd[2133]: 2026-04-13 19:25:16.088 [INFO][5123] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.135/26] IPv6=[] ContainerID="b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" HandleID="k8s-pod-network.b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:16.217748 containerd[2133]: 2026-04-13 19:25:16.122 [INFO][4972] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpbcs" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a02eceba-5363-4cf1-87ce-7f671e2cd0cc", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"", Pod:"coredns-674b8bbfcf-jpbcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48fbb858980", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:16.217748 containerd[2133]: 2026-04-13 19:25:16.127 [INFO][4972] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.135/32] ContainerID="b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpbcs" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:16.217748 containerd[2133]: 2026-04-13 19:25:16.127 [INFO][4972] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48fbb858980 ContainerID="b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpbcs" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:16.217748 containerd[2133]: 2026-04-13 19:25:16.139 [INFO][4972] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-jpbcs" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:16.217748 containerd[2133]: 2026-04-13 19:25:16.143 [INFO][4972] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpbcs" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a02eceba-5363-4cf1-87ce-7f671e2cd0cc", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd", Pod:"coredns-674b8bbfcf-jpbcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48fbb858980", MAC:"de:c0:96:f4:56:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:16.217748 containerd[2133]: 2026-04-13 19:25:16.186 [INFO][4972] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpbcs" WorkloadEndpoint="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:16.234796 systemd-networkd[1694]: cali53dfae7814d: Gained IPv6LL Apr 13 19:25:16.301472 containerd[2133]: time="2026-04-13T19:25:16.301402403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rzcwq,Uid:c8a1f799-8088-43db-b33b-f83deb990843,Namespace:kube-system,Attempt:1,} returns sandbox id \"7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354\"" Apr 13 19:25:16.314059 systemd[1]: run-containerd-runc-k8s.io-67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734-runc.N4q1Ra.mount: Deactivated successfully. 
Apr 13 19:25:16.362986 containerd[2133]: time="2026-04-13T19:25:16.362900148Z" level=info msg="CreateContainer within sandbox \"7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:25:16.395084 containerd[2133]: time="2026-04-13T19:25:16.395010216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-kpdfg,Uid:76e86138-05a0-4187-9e4b-c73d50410649,Namespace:calico-system,Attempt:1,} returns sandbox id \"67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734\"" Apr 13 19:25:16.449043 containerd[2133]: time="2026-04-13T19:25:16.448989120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 19:25:16.490184 systemd-networkd[1694]: cali729c491e1d3: Gained IPv6LL Apr 13 19:25:16.572513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1606893226.mount: Deactivated successfully. Apr 13 19:25:16.684276 containerd[2133]: time="2026-04-13T19:25:16.681605221Z" level=info msg="CreateContainer within sandbox \"7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1801b60b5144d8440859e66937d9c3591d41b58168d45ed3a708c5fb5df6b91d\"" Apr 13 19:25:16.682134 systemd-networkd[1694]: cali4279c56c6d4: Gained IPv6LL Apr 13 19:25:16.704896 containerd[2133]: time="2026-04-13T19:25:16.667697953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:16.704896 containerd[2133]: time="2026-04-13T19:25:16.667804657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:16.704896 containerd[2133]: time="2026-04-13T19:25:16.667831693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:16.704896 containerd[2133]: time="2026-04-13T19:25:16.674804689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:16.741678 containerd[2133]: time="2026-04-13T19:25:16.740418277Z" level=info msg="StartContainer for \"1801b60b5144d8440859e66937d9c3591d41b58168d45ed3a708c5fb5df6b91d\"" Apr 13 19:25:16.759090 containerd[2133]: time="2026-04-13T19:25:16.725501845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:16.759090 containerd[2133]: time="2026-04-13T19:25:16.725595181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:16.759090 containerd[2133]: time="2026-04-13T19:25:16.725624029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:16.759090 containerd[2133]: time="2026-04-13T19:25:16.746760290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:16.771323 systemd-networkd[1694]: cali2e48c4bfbd1: Link UP Apr 13 19:25:16.775265 systemd-networkd[1694]: cali2e48c4bfbd1: Gained carrier Apr 13 19:25:16.814722 containerd[2133]: time="2026-04-13T19:25:16.814178150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7949b6b746-86tp4,Uid:49aa872e-1beb-42e6-8cd8-546921069c20,Namespace:calico-system,Attempt:1,} returns sandbox id \"62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054\"" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:15.349 [ERROR][5151] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:15.440 [INFO][5151] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0 whisker-79f47f5d66- calico-system 5e8dac51-94d4-4e39-aa6a-5d64352e0aa0 951 0 2026-04-13 19:25:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79f47f5d66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-195 whisker-79f47f5d66-bmfqd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2e48c4bfbd1 [] [] }} ContainerID="f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" Namespace="calico-system" Pod="whisker-79f47f5d66-bmfqd" WorkloadEndpoint="ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:15.441 [INFO][5151] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" Namespace="calico-system" Pod="whisker-79f47f5d66-bmfqd" 
WorkloadEndpoint="ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.285 [INFO][5277] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" HandleID="k8s-pod-network.f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" Workload="ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.385 [INFO][5277] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" HandleID="k8s-pod-network.f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" Workload="ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121df0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-195", "pod":"whisker-79f47f5d66-bmfqd", "timestamp":"2026-04-13 19:25:16.285249827 +0000 UTC"}, Hostname:"ip-172-31-26-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001669a0)} Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.396 [INFO][5277] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.396 [INFO][5277] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.397 [INFO][5277] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-195' Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.445 [INFO][5277] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" host="ip-172-31-26-195" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.515 [INFO][5277] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-195" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.569 [INFO][5277] ipam/ipam.go 526: Trying affinity for 192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.590 [INFO][5277] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.620 [INFO][5277] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ip-172-31-26-195" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.626 [INFO][5277] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" host="ip-172-31-26-195" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.633 [INFO][5277] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.672 [INFO][5277] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" host="ip-172-31-26-195" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.732 [INFO][5277] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.136/26] block=192.168.97.128/26 
handle="k8s-pod-network.f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" host="ip-172-31-26-195" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.732 [INFO][5277] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.136/26] handle="k8s-pod-network.f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" host="ip-172-31-26-195" Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.732 [INFO][5277] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:16.862395 containerd[2133]: 2026-04-13 19:25:16.737 [INFO][5277] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.136/26] IPv6=[] ContainerID="f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" HandleID="k8s-pod-network.f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" Workload="ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0" Apr 13 19:25:16.864699 containerd[2133]: 2026-04-13 19:25:16.750 [INFO][5151] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" Namespace="calico-system" Pod="whisker-79f47f5d66-bmfqd" WorkloadEndpoint="ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0", GenerateName:"whisker-79f47f5d66-", Namespace:"calico-system", SelfLink:"", UID:"5e8dac51-94d4-4e39-aa6a-5d64352e0aa0", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79f47f5d66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"", Pod:"whisker-79f47f5d66-bmfqd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e48c4bfbd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:16.864699 containerd[2133]: 2026-04-13 19:25:16.750 [INFO][5151] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.136/32] ContainerID="f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" Namespace="calico-system" Pod="whisker-79f47f5d66-bmfqd" WorkloadEndpoint="ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0" Apr 13 19:25:16.864699 containerd[2133]: 2026-04-13 19:25:16.752 [INFO][5151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e48c4bfbd1 ContainerID="f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" Namespace="calico-system" Pod="whisker-79f47f5d66-bmfqd" WorkloadEndpoint="ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0" Apr 13 19:25:16.864699 containerd[2133]: 2026-04-13 19:25:16.770 [INFO][5151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" Namespace="calico-system" Pod="whisker-79f47f5d66-bmfqd" WorkloadEndpoint="ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0" Apr 13 19:25:16.864699 containerd[2133]: 2026-04-13 19:25:16.771 [INFO][5151] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" 
Namespace="calico-system" Pod="whisker-79f47f5d66-bmfqd" WorkloadEndpoint="ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0", GenerateName:"whisker-79f47f5d66-", Namespace:"calico-system", SelfLink:"", UID:"5e8dac51-94d4-4e39-aa6a-5d64352e0aa0", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 25, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79f47f5d66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c", Pod:"whisker-79f47f5d66-bmfqd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e48c4bfbd1", MAC:"b6:f4:e9:f6:cb:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:16.864699 containerd[2133]: 2026-04-13 19:25:16.822 [INFO][5151] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c" Namespace="calico-system" Pod="whisker-79f47f5d66-bmfqd" WorkloadEndpoint="ip--172--31--26--195-k8s-whisker--79f47f5d66--bmfqd-eth0" Apr 13 19:25:16.874060 
systemd-networkd[1694]: cali666bb30349a: Gained IPv6LL Apr 13 19:25:16.877170 systemd-networkd[1694]: cali7f6e7df96df: Gained IPv6LL Apr 13 19:25:16.925835 containerd[2133]: time="2026-04-13T19:25:16.925626110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pjcq,Uid:d14c8f4d-16d8-4d7e-83af-5e5a012516fe,Namespace:calico-system,Attempt:1,} returns sandbox id \"ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37\"" Apr 13 19:25:16.971496 containerd[2133]: time="2026-04-13T19:25:16.971297859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9697dc4b-6kgd9,Uid:69ebf535-2663-4af7-9297-fd6777511804,Namespace:calico-system,Attempt:1,} returns sandbox id \"430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f\"" Apr 13 19:25:17.103370 containerd[2133]: time="2026-04-13T19:25:17.101335775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:17.103370 containerd[2133]: time="2026-04-13T19:25:17.101511335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:17.108428 containerd[2133]: time="2026-04-13T19:25:17.103148543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:17.108428 containerd[2133]: time="2026-04-13T19:25:17.107846771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:17.205379 containerd[2133]: time="2026-04-13T19:25:17.204943620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpbcs,Uid:a02eceba-5363-4cf1-87ce-7f671e2cd0cc,Namespace:kube-system,Attempt:1,} returns sandbox id \"b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd\"" Apr 13 19:25:17.353114 containerd[2133]: time="2026-04-13T19:25:17.350935177Z" level=info msg="CreateContainer within sandbox \"b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:25:17.353114 containerd[2133]: time="2026-04-13T19:25:17.351795673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9697dc4b-j7tsc,Uid:702af813-e895-432b-a737-e7ecba2b6103,Namespace:calico-system,Attempt:1,} returns sandbox id \"65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369\"" Apr 13 19:25:17.438200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988977484.mount: Deactivated successfully. 
Apr 13 19:25:17.467478 containerd[2133]: time="2026-04-13T19:25:17.466664629Z" level=info msg="CreateContainer within sandbox \"b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8243b701f15194c7af3a630fda360b8859dc7bf8243ba1fe57f0c0db409ec9ae\"" Apr 13 19:25:17.470737 containerd[2133]: time="2026-04-13T19:25:17.469587301Z" level=info msg="StartContainer for \"8243b701f15194c7af3a630fda360b8859dc7bf8243ba1fe57f0c0db409ec9ae\"" Apr 13 19:25:17.482128 containerd[2133]: time="2026-04-13T19:25:17.481955677Z" level=info msg="StartContainer for \"1801b60b5144d8440859e66937d9c3591d41b58168d45ed3a708c5fb5df6b91d\" returns successfully" Apr 13 19:25:17.581246 systemd-networkd[1694]: cali04ef6bc344b: Gained IPv6LL Apr 13 19:25:17.597451 containerd[2133]: time="2026-04-13T19:25:17.596506418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79f47f5d66-bmfqd,Uid:5e8dac51-94d4-4e39-aa6a-5d64352e0aa0,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c\"" Apr 13 19:25:17.614728 kernel: calico-node[5478]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 19:25:17.785888 containerd[2133]: time="2026-04-13T19:25:17.785198487Z" level=info msg="StartContainer for \"8243b701f15194c7af3a630fda360b8859dc7bf8243ba1fe57f0c0db409ec9ae\" returns successfully" Apr 13 19:25:17.837835 systemd-journald[1609]: Under memory pressure, flushing caches. Apr 13 19:25:17.834837 systemd-resolved[2026]: Under memory pressure, flushing caches. Apr 13 19:25:17.834907 systemd-resolved[2026]: Flushed all caches. 
Apr 13 19:25:17.839061 systemd-networkd[1694]: cali48fbb858980: Gained IPv6LL Apr 13 19:25:18.409833 systemd-networkd[1694]: cali2e48c4bfbd1: Gained IPv6LL Apr 13 19:25:18.501556 kubelet[3617]: I0413 19:25:18.499554 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rzcwq" podStartSLOduration=49.499526006 podStartE2EDuration="49.499526006s" podCreationTimestamp="2026-04-13 19:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:18.496210886 +0000 UTC m=+56.142281872" watchObservedRunningTime="2026-04-13 19:25:18.499526006 +0000 UTC m=+56.145596980" Apr 13 19:25:18.531552 kubelet[3617]: I0413 19:25:18.531383 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jpbcs" podStartSLOduration=49.531358466 podStartE2EDuration="49.531358466s" podCreationTimestamp="2026-04-13 19:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:18.528800846 +0000 UTC m=+56.174871856" watchObservedRunningTime="2026-04-13 19:25:18.531358466 +0000 UTC m=+56.177429452" Apr 13 19:25:18.880768 systemd-networkd[1694]: vxlan.calico: Link UP Apr 13 19:25:18.880792 systemd-networkd[1694]: vxlan.calico: Gained carrier Apr 13 19:25:18.964034 (udev-worker)[5142]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:25:20.086004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312662036.mount: Deactivated successfully. 
Apr 13 19:25:20.457150 systemd-networkd[1694]: vxlan.calico: Gained IPv6LL Apr 13 19:25:20.708186 containerd[2133]: time="2026-04-13T19:25:20.708122057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:20.712369 containerd[2133]: time="2026-04-13T19:25:20.711929561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 13 19:25:20.714457 containerd[2133]: time="2026-04-13T19:25:20.714382277Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:20.726590 containerd[2133]: time="2026-04-13T19:25:20.726413957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:20.729654 containerd[2133]: time="2026-04-13T19:25:20.728917565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 4.275901533s" Apr 13 19:25:20.729654 containerd[2133]: time="2026-04-13T19:25:20.728983493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 13 19:25:20.731457 containerd[2133]: time="2026-04-13T19:25:20.731385029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 19:25:20.738427 containerd[2133]: time="2026-04-13T19:25:20.738377393Z" level=info 
msg="CreateContainer within sandbox \"67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 19:25:20.766779 containerd[2133]: time="2026-04-13T19:25:20.766722737Z" level=info msg="CreateContainer within sandbox \"67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9b96a94eb87e8d5ea9eb439652c128d23015e4e8cbb8a2fffc14d1dcc06d4aa1\"" Apr 13 19:25:20.773980 containerd[2133]: time="2026-04-13T19:25:20.773911590Z" level=info msg="StartContainer for \"9b96a94eb87e8d5ea9eb439652c128d23015e4e8cbb8a2fffc14d1dcc06d4aa1\"" Apr 13 19:25:20.977223 containerd[2133]: time="2026-04-13T19:25:20.976927375Z" level=info msg="StartContainer for \"9b96a94eb87e8d5ea9eb439652c128d23015e4e8cbb8a2fffc14d1dcc06d4aa1\" returns successfully" Apr 13 19:25:22.480211 ntpd[2088]: Listen normally on 6 vxlan.calico 192.168.97.128:123 Apr 13 19:25:22.480345 ntpd[2088]: Listen normally on 7 cali53dfae7814d [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 19:25:22.482655 ntpd[2088]: 13 Apr 19:25:22 ntpd[2088]: Listen normally on 6 vxlan.calico 192.168.97.128:123 Apr 13 19:25:22.482655 ntpd[2088]: 13 Apr 19:25:22 ntpd[2088]: Listen normally on 7 cali53dfae7814d [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 19:25:22.482655 ntpd[2088]: 13 Apr 19:25:22 ntpd[2088]: Listen normally on 8 cali729c491e1d3 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 13 19:25:22.482655 ntpd[2088]: 13 Apr 19:25:22 ntpd[2088]: Listen normally on 9 cali666bb30349a [fe80::ecee:eeff:feee:eeee%6]:123 Apr 13 19:25:22.482655 ntpd[2088]: 13 Apr 19:25:22 ntpd[2088]: Listen normally on 10 cali4279c56c6d4 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 13 19:25:22.482655 ntpd[2088]: 13 Apr 19:25:22 ntpd[2088]: Listen normally on 11 cali7f6e7df96df [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 19:25:22.482655 ntpd[2088]: 13 Apr 19:25:22 ntpd[2088]: Listen normally on 12 cali04ef6bc344b 
[fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 19:25:22.482655 ntpd[2088]: 13 Apr 19:25:22 ntpd[2088]: Listen normally on 13 cali48fbb858980 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 19:25:22.482655 ntpd[2088]: 13 Apr 19:25:22 ntpd[2088]: Listen normally on 14 cali2e48c4bfbd1 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 13 19:25:22.482655 ntpd[2088]: 13 Apr 19:25:22 ntpd[2088]: Listen normally on 15 vxlan.calico [fe80::6410:abff:fe82:39bd%12]:123 Apr 13 19:25:22.480429 ntpd[2088]: Listen normally on 8 cali729c491e1d3 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 13 19:25:22.480496 ntpd[2088]: Listen normally on 9 cali666bb30349a [fe80::ecee:eeff:feee:eeee%6]:123 Apr 13 19:25:22.480562 ntpd[2088]: Listen normally on 10 cali4279c56c6d4 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 13 19:25:22.480630 ntpd[2088]: Listen normally on 11 cali7f6e7df96df [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 19:25:22.480734 ntpd[2088]: Listen normally on 12 cali04ef6bc344b [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 19:25:22.480808 ntpd[2088]: Listen normally on 13 cali48fbb858980 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 19:25:22.480896 ntpd[2088]: Listen normally on 14 cali2e48c4bfbd1 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 13 19:25:22.480975 ntpd[2088]: Listen normally on 15 vxlan.calico [fe80::6410:abff:fe82:39bd%12]:123 Apr 13 19:25:22.615089 containerd[2133]: time="2026-04-13T19:25:22.615020623Z" level=info msg="StopPodSandbox for \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\"" Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.737 [WARNING][5950] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8a1f799-8088-43db-b33b-f83deb990843", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354", Pod:"coredns-674b8bbfcf-rzcwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53dfae7814d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.738 
[INFO][5950] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.738 [INFO][5950] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" iface="eth0" netns="" Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.738 [INFO][5950] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.738 [INFO][5950] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.795 [INFO][5960] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" HandleID="k8s-pod-network.36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.795 [INFO][5960] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.795 [INFO][5960] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.815 [WARNING][5960] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" HandleID="k8s-pod-network.36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.816 [INFO][5960] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" HandleID="k8s-pod-network.36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.822 [INFO][5960] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:22.840727 containerd[2133]: 2026-04-13 19:25:22.830 [INFO][5950] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:22.840727 containerd[2133]: time="2026-04-13T19:25:22.840531620Z" level=info msg="TearDown network for sandbox \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\" successfully" Apr 13 19:25:22.840727 containerd[2133]: time="2026-04-13T19:25:22.840569912Z" level=info msg="StopPodSandbox for \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\" returns successfully" Apr 13 19:25:22.841736 containerd[2133]: time="2026-04-13T19:25:22.841556048Z" level=info msg="RemovePodSandbox for \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\"" Apr 13 19:25:22.841736 containerd[2133]: time="2026-04-13T19:25:22.841609916Z" level=info msg="Forcibly stopping sandbox \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\"" Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:22.925 [WARNING][5975] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8a1f799-8088-43db-b33b-f83deb990843", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"7e4429b6704a256941df404c4720b388dbedddaeb50d9979ff8aa50fb857a354", Pod:"coredns-674b8bbfcf-rzcwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53dfae7814d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:22.926 
[INFO][5975] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:22.926 [INFO][5975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" iface="eth0" netns="" Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:22.926 [INFO][5975] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:22.926 [INFO][5975] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:22.977 [INFO][5982] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" HandleID="k8s-pod-network.36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:22.977 [INFO][5982] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:22.977 [INFO][5982] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:22.993 [WARNING][5982] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" HandleID="k8s-pod-network.36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:22.993 [INFO][5982] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" HandleID="k8s-pod-network.36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--rzcwq-eth0" Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:22.996 [INFO][5982] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.008262 containerd[2133]: 2026-04-13 19:25:23.000 [INFO][5975] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db" Apr 13 19:25:23.008262 containerd[2133]: time="2026-04-13T19:25:23.008071577Z" level=info msg="TearDown network for sandbox \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\" successfully" Apr 13 19:25:23.022145 containerd[2133]: time="2026-04-13T19:25:23.015076097Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:25:23.022145 containerd[2133]: time="2026-04-13T19:25:23.015189101Z" level=info msg="RemovePodSandbox \"36b6004931fc0126f359ce44a0812dc764dd5e3fdd0c16e74acbec54131b80db\" returns successfully" Apr 13 19:25:23.022145 containerd[2133]: time="2026-04-13T19:25:23.017643209Z" level=info msg="StopPodSandbox for \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\"" Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.102 [WARNING][5997] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0", GenerateName:"calico-kube-controllers-7949b6b746-", Namespace:"calico-system", SelfLink:"", UID:"49aa872e-1beb-42e6-8cd8-546921069c20", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7949b6b746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054", Pod:"calico-kube-controllers-7949b6b746-86tp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali729c491e1d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.103 [INFO][5997] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.103 [INFO][5997] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" iface="eth0" netns="" Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.103 [INFO][5997] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.103 [INFO][5997] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.178 [INFO][6004] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" HandleID="k8s-pod-network.aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.178 [INFO][6004] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.178 [INFO][6004] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.196 [WARNING][6004] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" HandleID="k8s-pod-network.aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.197 [INFO][6004] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" HandleID="k8s-pod-network.aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.201 [INFO][6004] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.213820 containerd[2133]: 2026-04-13 19:25:23.208 [INFO][5997] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:23.213820 containerd[2133]: time="2026-04-13T19:25:23.212453382Z" level=info msg="TearDown network for sandbox \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\" successfully" Apr 13 19:25:23.213820 containerd[2133]: time="2026-04-13T19:25:23.212492730Z" level=info msg="StopPodSandbox for \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\" returns successfully" Apr 13 19:25:23.213820 containerd[2133]: time="2026-04-13T19:25:23.213326010Z" level=info msg="RemovePodSandbox for \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\"" Apr 13 19:25:23.213820 containerd[2133]: time="2026-04-13T19:25:23.213414270Z" level=info msg="Forcibly stopping sandbox \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\"" Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.322 [WARNING][6022] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0", GenerateName:"calico-kube-controllers-7949b6b746-", Namespace:"calico-system", SelfLink:"", UID:"49aa872e-1beb-42e6-8cd8-546921069c20", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7949b6b746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054", Pod:"calico-kube-controllers-7949b6b746-86tp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali729c491e1d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.324 [INFO][6022] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.325 [INFO][6022] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" iface="eth0" netns="" Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.325 [INFO][6022] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.326 [INFO][6022] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.441 [INFO][6029] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" HandleID="k8s-pod-network.aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.442 [INFO][6029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.442 [INFO][6029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.460 [WARNING][6029] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" HandleID="k8s-pod-network.aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.460 [INFO][6029] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" HandleID="k8s-pod-network.aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Workload="ip--172--31--26--195-k8s-calico--kube--controllers--7949b6b746--86tp4-eth0" Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.463 [INFO][6029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.480319 containerd[2133]: 2026-04-13 19:25:23.471 [INFO][6022] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671" Apr 13 19:25:23.480319 containerd[2133]: time="2026-04-13T19:25:23.479306683Z" level=info msg="TearDown network for sandbox \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\" successfully" Apr 13 19:25:23.492012 containerd[2133]: time="2026-04-13T19:25:23.491440675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:25:23.492012 containerd[2133]: time="2026-04-13T19:25:23.491539315Z" level=info msg="RemovePodSandbox \"aa502928c1cb7bcb49413f4751be25c5c6eb5fee890aaf28629281f7deb95671\" returns successfully" Apr 13 19:25:23.493297 containerd[2133]: time="2026-04-13T19:25:23.492626791Z" level=info msg="StopPodSandbox for \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\"" Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.683 [WARNING][6046] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0", GenerateName:"calico-apiserver-5d9697dc4b-", Namespace:"calico-system", SelfLink:"", UID:"69ebf535-2663-4af7-9297-fd6777511804", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9697dc4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f", Pod:"calico-apiserver-5d9697dc4b-6kgd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali666bb30349a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.683 [INFO][6046] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.683 [INFO][6046] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" iface="eth0" netns="" Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.684 [INFO][6046] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.684 [INFO][6046] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.754 [INFO][6074] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" HandleID="k8s-pod-network.fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.754 [INFO][6074] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.755 [INFO][6074] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.774 [WARNING][6074] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" HandleID="k8s-pod-network.fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.775 [INFO][6074] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" HandleID="k8s-pod-network.fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.777 [INFO][6074] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.788513 containerd[2133]: 2026-04-13 19:25:23.781 [INFO][6046] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:23.790604 containerd[2133]: time="2026-04-13T19:25:23.790184997Z" level=info msg="TearDown network for sandbox \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\" successfully" Apr 13 19:25:23.790604 containerd[2133]: time="2026-04-13T19:25:23.790237821Z" level=info msg="StopPodSandbox for \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\" returns successfully" Apr 13 19:25:23.791161 containerd[2133]: time="2026-04-13T19:25:23.790883973Z" level=info msg="RemovePodSandbox for \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\"" Apr 13 19:25:23.791161 containerd[2133]: time="2026-04-13T19:25:23.790930005Z" level=info msg="Forcibly stopping sandbox \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\"" Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.894 [WARNING][6088] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0", GenerateName:"calico-apiserver-5d9697dc4b-", Namespace:"calico-system", SelfLink:"", UID:"69ebf535-2663-4af7-9297-fd6777511804", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9697dc4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f", Pod:"calico-apiserver-5d9697dc4b-6kgd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali666bb30349a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.895 [INFO][6088] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.895 [INFO][6088] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" iface="eth0" netns="" Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.895 [INFO][6088] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.895 [INFO][6088] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.944 [INFO][6095] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" HandleID="k8s-pod-network.fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.944 [INFO][6095] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.944 [INFO][6095] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.960 [WARNING][6095] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" HandleID="k8s-pod-network.fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.960 [INFO][6095] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" HandleID="k8s-pod-network.fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--6kgd9-eth0" Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.964 [INFO][6095] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.974188 containerd[2133]: 2026-04-13 19:25:23.969 [INFO][6088] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05" Apr 13 19:25:23.975100 containerd[2133]: time="2026-04-13T19:25:23.974255085Z" level=info msg="TearDown network for sandbox \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\" successfully" Apr 13 19:25:23.979779 containerd[2133]: time="2026-04-13T19:25:23.979335477Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:25:23.979779 containerd[2133]: time="2026-04-13T19:25:23.979452261Z" level=info msg="RemovePodSandbox \"fbff208246215529031519f5024063ac7ee84246b4627af3cc1e24df0ec69b05\" returns successfully" Apr 13 19:25:23.980731 containerd[2133]: time="2026-04-13T19:25:23.980662473Z" level=info msg="StopPodSandbox for \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\"" Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.102 [WARNING][6110] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0", GenerateName:"calico-apiserver-5d9697dc4b-", Namespace:"calico-system", SelfLink:"", UID:"702af813-e895-432b-a737-e7ecba2b6103", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9697dc4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369", Pod:"calico-apiserver-5d9697dc4b-j7tsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali04ef6bc344b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.102 [INFO][6110] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.103 [INFO][6110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" iface="eth0" netns="" Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.103 [INFO][6110] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.103 [INFO][6110] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.172 [INFO][6118] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" HandleID="k8s-pod-network.514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.172 [INFO][6118] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.172 [INFO][6118] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.196 [WARNING][6118] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" HandleID="k8s-pod-network.514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.196 [INFO][6118] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" HandleID="k8s-pod-network.514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.201 [INFO][6118] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:24.213159 containerd[2133]: 2026-04-13 19:25:24.207 [INFO][6110] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:24.214029 containerd[2133]: time="2026-04-13T19:25:24.213203659Z" level=info msg="TearDown network for sandbox \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\" successfully" Apr 13 19:25:24.214029 containerd[2133]: time="2026-04-13T19:25:24.213244303Z" level=info msg="StopPodSandbox for \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\" returns successfully" Apr 13 19:25:24.215043 containerd[2133]: time="2026-04-13T19:25:24.214251271Z" level=info msg="RemovePodSandbox for \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\"" Apr 13 19:25:24.215043 containerd[2133]: time="2026-04-13T19:25:24.214310911Z" level=info msg="Forcibly stopping sandbox \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\"" Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.304 [WARNING][6132] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0", GenerateName:"calico-apiserver-5d9697dc4b-", Namespace:"calico-system", SelfLink:"", UID:"702af813-e895-432b-a737-e7ecba2b6103", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9697dc4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369", Pod:"calico-apiserver-5d9697dc4b-j7tsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali04ef6bc344b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.305 [INFO][6132] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.305 [INFO][6132] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" iface="eth0" netns="" Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.305 [INFO][6132] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.305 [INFO][6132] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.369 [INFO][6139] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" HandleID="k8s-pod-network.514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.370 [INFO][6139] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.370 [INFO][6139] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.391 [WARNING][6139] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" HandleID="k8s-pod-network.514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.391 [INFO][6139] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" HandleID="k8s-pod-network.514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Workload="ip--172--31--26--195-k8s-calico--apiserver--5d9697dc4b--j7tsc-eth0" Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.397 [INFO][6139] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:24.407780 containerd[2133]: 2026-04-13 19:25:24.403 [INFO][6132] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69" Apr 13 19:25:24.407780 containerd[2133]: time="2026-04-13T19:25:24.407715896Z" level=info msg="TearDown network for sandbox \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\" successfully" Apr 13 19:25:24.413593 containerd[2133]: time="2026-04-13T19:25:24.413513288Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:25:24.414316 containerd[2133]: time="2026-04-13T19:25:24.413631344Z" level=info msg="RemovePodSandbox \"514110ac7937a74d953fbbdc9cf2c144aedf91ce82b1ee651ab827063cf05a69\" returns successfully" Apr 13 19:25:24.414316 containerd[2133]: time="2026-04-13T19:25:24.414266900Z" level=info msg="StopPodSandbox for \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\"" Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.520 [WARNING][6153] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d14c8f4d-16d8-4d7e-83af-5e5a012516fe", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37", Pod:"csi-node-driver-4pjcq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f6e7df96df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.523 [INFO][6153] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.523 [INFO][6153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" iface="eth0" netns="" Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.523 [INFO][6153] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.523 [INFO][6153] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.596 [INFO][6160] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" HandleID="k8s-pod-network.8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.597 [INFO][6160] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.597 [INFO][6160] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.621 [WARNING][6160] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" HandleID="k8s-pod-network.8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.621 [INFO][6160] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" HandleID="k8s-pod-network.8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.625 [INFO][6160] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:24.641828 containerd[2133]: 2026-04-13 19:25:24.632 [INFO][6153] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:24.641828 containerd[2133]: time="2026-04-13T19:25:24.641788581Z" level=info msg="TearDown network for sandbox \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\" successfully" Apr 13 19:25:24.641828 containerd[2133]: time="2026-04-13T19:25:24.641832765Z" level=info msg="StopPodSandbox for \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\" returns successfully" Apr 13 19:25:24.644419 containerd[2133]: time="2026-04-13T19:25:24.642710313Z" level=info msg="RemovePodSandbox for \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\"" Apr 13 19:25:24.644419 containerd[2133]: time="2026-04-13T19:25:24.643987677Z" level=info msg="Forcibly stopping sandbox \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\"" Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.787 [WARNING][6174] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d14c8f4d-16d8-4d7e-83af-5e5a012516fe", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37", Pod:"csi-node-driver-4pjcq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f6e7df96df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.787 [INFO][6174] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.787 [INFO][6174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" iface="eth0" netns="" Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.787 [INFO][6174] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.787 [INFO][6174] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.882 [INFO][6191] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" HandleID="k8s-pod-network.8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.885 [INFO][6191] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.885 [INFO][6191] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.907 [WARNING][6191] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" HandleID="k8s-pod-network.8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.907 [INFO][6191] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" HandleID="k8s-pod-network.8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Workload="ip--172--31--26--195-k8s-csi--node--driver--4pjcq-eth0" Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.911 [INFO][6191] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:24.923579 containerd[2133]: 2026-04-13 19:25:24.918 [INFO][6174] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61" Apr 13 19:25:24.926545 containerd[2133]: time="2026-04-13T19:25:24.925545922Z" level=info msg="TearDown network for sandbox \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\" successfully" Apr 13 19:25:24.931917 containerd[2133]: time="2026-04-13T19:25:24.931851370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:25:24.932117 containerd[2133]: time="2026-04-13T19:25:24.931957066Z" level=info msg="RemovePodSandbox \"8fa92bd3fdafb0918db5a60627f3a4ff18ce1c5959f1038e1e9af78c26c24a61\" returns successfully" Apr 13 19:25:24.933189 containerd[2133]: time="2026-04-13T19:25:24.932976826Z" level=info msg="StopPodSandbox for \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\"" Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.024 [WARNING][6209] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" WorkloadEndpoint="ip--172--31--26--195-k8s-whisker--65f5db84d5--mpt4b-eth0" Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.024 [INFO][6209] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.024 [INFO][6209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" iface="eth0" netns="" Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.024 [INFO][6209] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.024 [INFO][6209] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.093 [INFO][6216] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" HandleID="k8s-pod-network.9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Workload="ip--172--31--26--195-k8s-whisker--65f5db84d5--mpt4b-eth0" Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.095 [INFO][6216] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.095 [INFO][6216] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.117 [WARNING][6216] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" HandleID="k8s-pod-network.9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Workload="ip--172--31--26--195-k8s-whisker--65f5db84d5--mpt4b-eth0" Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.117 [INFO][6216] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" HandleID="k8s-pod-network.9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Workload="ip--172--31--26--195-k8s-whisker--65f5db84d5--mpt4b-eth0" Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.120 [INFO][6216] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:25.131815 containerd[2133]: 2026-04-13 19:25:25.126 [INFO][6209] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:25.135498 containerd[2133]: time="2026-04-13T19:25:25.131863111Z" level=info msg="TearDown network for sandbox \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\" successfully" Apr 13 19:25:25.135498 containerd[2133]: time="2026-04-13T19:25:25.131905891Z" level=info msg="StopPodSandbox for \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\" returns successfully" Apr 13 19:25:25.135498 containerd[2133]: time="2026-04-13T19:25:25.133143271Z" level=info msg="RemovePodSandbox for \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\"" Apr 13 19:25:25.135498 containerd[2133]: time="2026-04-13T19:25:25.133387243Z" level=info msg="Forcibly stopping sandbox \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\"" Apr 13 19:25:25.368766 containerd[2133]: time="2026-04-13T19:25:25.368204288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 
19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.261 [WARNING][6230] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" WorkloadEndpoint="ip--172--31--26--195-k8s-whisker--65f5db84d5--mpt4b-eth0" Apr 13 19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.262 [INFO][6230] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.262 [INFO][6230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" iface="eth0" netns="" Apr 13 19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.262 [INFO][6230] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.262 [INFO][6230] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.330 [INFO][6238] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" HandleID="k8s-pod-network.9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Workload="ip--172--31--26--195-k8s-whisker--65f5db84d5--mpt4b-eth0" Apr 13 19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.331 [INFO][6238] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.331 [INFO][6238] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.355 [WARNING][6238] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" HandleID="k8s-pod-network.9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Workload="ip--172--31--26--195-k8s-whisker--65f5db84d5--mpt4b-eth0" Apr 13 19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.355 [INFO][6238] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" HandleID="k8s-pod-network.9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Workload="ip--172--31--26--195-k8s-whisker--65f5db84d5--mpt4b-eth0" Apr 13 19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.359 [INFO][6238] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:25.368936 containerd[2133]: 2026-04-13 19:25:25.363 [INFO][6230] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488" Apr 13 19:25:25.369528 containerd[2133]: time="2026-04-13T19:25:25.368940212Z" level=info msg="TearDown network for sandbox \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\" successfully" Apr 13 19:25:25.372332 containerd[2133]: time="2026-04-13T19:25:25.371815700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 13 19:25:25.373139 containerd[2133]: time="2026-04-13T19:25:25.372657284Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:25.375917 containerd[2133]: time="2026-04-13T19:25:25.375847976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:25:25.376053 containerd[2133]: time="2026-04-13T19:25:25.375948080Z" level=info msg="RemovePodSandbox \"9ec5fe7a4b048adad1936c3276c2eb6fce62e1099e756efd3ba41473ec1c1488\" returns successfully" Apr 13 19:25:25.376851 containerd[2133]: time="2026-04-13T19:25:25.376576652Z" level=info msg="StopPodSandbox for \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\"" Apr 13 19:25:25.382725 containerd[2133]: time="2026-04-13T19:25:25.382214720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:25.384607 containerd[2133]: time="2026-04-13T19:25:25.384508400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 4.653042059s" Apr 13 19:25:25.384607 containerd[2133]: time="2026-04-13T19:25:25.384575600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 13 19:25:25.398978 containerd[2133]: time="2026-04-13T19:25:25.398908569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 19:25:25.445673 containerd[2133]: time="2026-04-13T19:25:25.445606845Z" level=info msg="CreateContainer within sandbox \"62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 19:25:25.478971 containerd[2133]: time="2026-04-13T19:25:25.478912365Z" level=info msg="CreateContainer within sandbox 
\"62ee799fa8ff28e7a0bb3187ba8c62c80df786b736f0518863605298db40f054\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"99821c3b87e0df845734d17bdfa39b2469d53e92e77f1b30a20f5e1ec372b2a7\"" Apr 13 19:25:25.480727 containerd[2133]: time="2026-04-13T19:25:25.479787081Z" level=info msg="StartContainer for \"99821c3b87e0df845734d17bdfa39b2469d53e92e77f1b30a20f5e1ec372b2a7\"" Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.511 [WARNING][6252] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"76e86138-05a0-4187-9e4b-c73d50410649", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734", Pod:"goldmane-5b85766d88-kpdfg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali4279c56c6d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.511 [INFO][6252] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.511 [INFO][6252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" iface="eth0" netns="" Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.511 [INFO][6252] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.511 [INFO][6252] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.590 [INFO][6270] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" HandleID="k8s-pod-network.d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.591 [INFO][6270] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.591 [INFO][6270] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.605 [WARNING][6270] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" HandleID="k8s-pod-network.d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.605 [INFO][6270] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" HandleID="k8s-pod-network.d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.608 [INFO][6270] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:25.614847 containerd[2133]: 2026-04-13 19:25:25.611 [INFO][6252] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:25.614847 containerd[2133]: time="2026-04-13T19:25:25.614828206Z" level=info msg="TearDown network for sandbox \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\" successfully" Apr 13 19:25:25.615767 containerd[2133]: time="2026-04-13T19:25:25.614868382Z" level=info msg="StopPodSandbox for \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\" returns successfully" Apr 13 19:25:25.617132 containerd[2133]: time="2026-04-13T19:25:25.616983874Z" level=info msg="RemovePodSandbox for \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\"" Apr 13 19:25:25.617132 containerd[2133]: time="2026-04-13T19:25:25.617059318Z" level=info msg="Forcibly stopping sandbox \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\"" Apr 13 19:25:25.689912 containerd[2133]: time="2026-04-13T19:25:25.689625250Z" level=info msg="StartContainer for \"99821c3b87e0df845734d17bdfa39b2469d53e92e77f1b30a20f5e1ec372b2a7\" returns successfully" Apr 13 19:25:25.796581 
containerd[2133]: 2026-04-13 19:25:25.706 [WARNING][6304] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"76e86138-05a0-4187-9e4b-c73d50410649", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"67e2987b79838d5671009a03f035a4bcc8bcbfbe232d4b3adf41cc8452735734", Pod:"goldmane-5b85766d88-kpdfg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4279c56c6d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:25.796581 containerd[2133]: 2026-04-13 19:25:25.707 [INFO][6304] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:25.796581 containerd[2133]: 2026-04-13 19:25:25.707 
[INFO][6304] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" iface="eth0" netns="" Apr 13 19:25:25.796581 containerd[2133]: 2026-04-13 19:25:25.707 [INFO][6304] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:25.796581 containerd[2133]: 2026-04-13 19:25:25.707 [INFO][6304] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:25.796581 containerd[2133]: 2026-04-13 19:25:25.769 [INFO][6319] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" HandleID="k8s-pod-network.d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:25.796581 containerd[2133]: 2026-04-13 19:25:25.769 [INFO][6319] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:25.796581 containerd[2133]: 2026-04-13 19:25:25.769 [INFO][6319] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:25.796581 containerd[2133]: 2026-04-13 19:25:25.786 [WARNING][6319] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" HandleID="k8s-pod-network.d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:25.796581 containerd[2133]: 2026-04-13 19:25:25.786 [INFO][6319] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" HandleID="k8s-pod-network.d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Workload="ip--172--31--26--195-k8s-goldmane--5b85766d88--kpdfg-eth0" Apr 13 19:25:25.796581 containerd[2133]: 2026-04-13 19:25:25.789 [INFO][6319] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:25.796581 containerd[2133]: 2026-04-13 19:25:25.793 [INFO][6304] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79" Apr 13 19:25:25.798219 containerd[2133]: time="2026-04-13T19:25:25.796604374Z" level=info msg="TearDown network for sandbox \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\" successfully" Apr 13 19:25:25.801329 containerd[2133]: time="2026-04-13T19:25:25.801261407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:25:25.801592 containerd[2133]: time="2026-04-13T19:25:25.801363983Z" level=info msg="RemovePodSandbox \"d5ffa5941af00fd610e8982e86fed2fa33afb696c631096966c4de9930c40c79\" returns successfully" Apr 13 19:25:25.802485 containerd[2133]: time="2026-04-13T19:25:25.802024823Z" level=info msg="StopPodSandbox for \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\"" Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:25.907 [WARNING][6344] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a02eceba-5363-4cf1-87ce-7f671e2cd0cc", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd", Pod:"coredns-674b8bbfcf-jpbcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48fbb858980", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:25.908 [INFO][6344] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:25.908 [INFO][6344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" iface="eth0" netns="" Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:25.908 [INFO][6344] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:25.908 [INFO][6344] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:26.000 [INFO][6351] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" HandleID="k8s-pod-network.1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:26.000 [INFO][6351] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:26.000 [INFO][6351] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:26.015 [WARNING][6351] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" HandleID="k8s-pod-network.1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:26.015 [INFO][6351] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" HandleID="k8s-pod-network.1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:26.017 [INFO][6351] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:26.024018 containerd[2133]: 2026-04-13 19:25:26.020 [INFO][6344] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:26.026555 containerd[2133]: time="2026-04-13T19:25:26.024863420Z" level=info msg="TearDown network for sandbox \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\" successfully" Apr 13 19:25:26.026555 containerd[2133]: time="2026-04-13T19:25:26.024909752Z" level=info msg="StopPodSandbox for \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\" returns successfully" Apr 13 19:25:26.026555 containerd[2133]: time="2026-04-13T19:25:26.026886956Z" level=info msg="RemovePodSandbox for \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\"" Apr 13 19:25:26.026555 containerd[2133]: time="2026-04-13T19:25:26.026933396Z" level=info msg="Forcibly stopping sandbox \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\"" Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.096 [WARNING][6367] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a02eceba-5363-4cf1-87ce-7f671e2cd0cc", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-195", ContainerID:"b0f7f22be253b7b1f57d70059cfa5b4cb75d2a5824f3dd8f436c4c4b94fb4cfd", Pod:"coredns-674b8bbfcf-jpbcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48fbb858980", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.097 
[INFO][6367] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.097 [INFO][6367] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" iface="eth0" netns="" Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.097 [INFO][6367] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.097 [INFO][6367] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.145 [INFO][6374] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" HandleID="k8s-pod-network.1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.145 [INFO][6374] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.145 [INFO][6374] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.162 [WARNING][6374] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" HandleID="k8s-pod-network.1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.162 [INFO][6374] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" HandleID="k8s-pod-network.1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Workload="ip--172--31--26--195-k8s-coredns--674b8bbfcf--jpbcs-eth0" Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.168 [INFO][6374] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:26.176407 containerd[2133]: 2026-04-13 19:25:26.172 [INFO][6367] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a" Apr 13 19:25:26.177579 containerd[2133]: time="2026-04-13T19:25:26.176465336Z" level=info msg="TearDown network for sandbox \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\" successfully" Apr 13 19:25:26.189570 containerd[2133]: time="2026-04-13T19:25:26.189500072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:25:26.189751 containerd[2133]: time="2026-04-13T19:25:26.189597920Z" level=info msg="RemovePodSandbox \"1a825dd9362100f316b5aa7116105ca1a5c3cffd0675a2ef00a9c34d4d36cb2a\" returns successfully" Apr 13 19:25:26.595289 kubelet[3617]: I0413 19:25:26.595174 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7949b6b746-86tp4" podStartSLOduration=27.103427064 podStartE2EDuration="35.595146334s" podCreationTimestamp="2026-04-13 19:24:51 +0000 UTC" firstStartedPulling="2026-04-13 19:25:16.903731402 +0000 UTC m=+54.549802364" lastFinishedPulling="2026-04-13 19:25:25.395450672 +0000 UTC m=+63.041521634" observedRunningTime="2026-04-13 19:25:26.58867879 +0000 UTC m=+64.234749788" watchObservedRunningTime="2026-04-13 19:25:26.595146334 +0000 UTC m=+64.241217332" Apr 13 19:25:26.597381 kubelet[3617]: I0413 19:25:26.597154 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-kpdfg" podStartSLOduration=34.291735749 podStartE2EDuration="38.597100162s" podCreationTimestamp="2026-04-13 19:24:48 +0000 UTC" firstStartedPulling="2026-04-13 19:25:16.425780136 +0000 UTC m=+54.071851110" lastFinishedPulling="2026-04-13 19:25:20.731144525 +0000 UTC m=+58.377215523" observedRunningTime="2026-04-13 19:25:21.518834813 +0000 UTC m=+59.164905799" watchObservedRunningTime="2026-04-13 19:25:26.597100162 +0000 UTC m=+64.243171136" Apr 13 19:25:26.626213 systemd[1]: run-containerd-runc-k8s.io-99821c3b87e0df845734d17bdfa39b2469d53e92e77f1b30a20f5e1ec372b2a7-runc.aljeQc.mount: Deactivated successfully. 
Apr 13 19:25:27.313663 containerd[2133]: time="2026-04-13T19:25:27.313609210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:27.317019 containerd[2133]: time="2026-04-13T19:25:27.316948258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 13 19:25:27.318088 containerd[2133]: time="2026-04-13T19:25:27.318000766Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:27.324117 containerd[2133]: time="2026-04-13T19:25:27.323756938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:27.325266 containerd[2133]: time="2026-04-13T19:25:27.325217206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.923554757s" Apr 13 19:25:27.325534 containerd[2133]: time="2026-04-13T19:25:27.325400770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 13 19:25:27.329492 containerd[2133]: time="2026-04-13T19:25:27.328049986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 19:25:27.335101 containerd[2133]: time="2026-04-13T19:25:27.335036926Z" level=info msg="CreateContainer within sandbox \"ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 19:25:27.358031 containerd[2133]: time="2026-04-13T19:25:27.357967030Z" level=info msg="CreateContainer within sandbox \"ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"78fb3d3615f39b4da3ca2830945970b6322ae7de9f7d4f03e8f0c87722852291\"" Apr 13 19:25:27.361411 containerd[2133]: time="2026-04-13T19:25:27.361227070Z" level=info msg="StartContainer for \"78fb3d3615f39b4da3ca2830945970b6322ae7de9f7d4f03e8f0c87722852291\"" Apr 13 19:25:27.504789 containerd[2133]: time="2026-04-13T19:25:27.504045443Z" level=info msg="StartContainer for \"78fb3d3615f39b4da3ca2830945970b6322ae7de9f7d4f03e8f0c87722852291\" returns successfully" Apr 13 19:25:28.472837 systemd[1]: Started sshd@7-172.31.26.195:22-4.175.71.9:34766.service - OpenSSH per-connection server daemon (4.175.71.9:34766). Apr 13 19:25:29.569198 sshd[6448]: Accepted publickey for core from 4.175.71.9 port 34766 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:29.577882 sshd[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:29.599599 systemd-logind[2104]: New session 8 of user core. Apr 13 19:25:29.605380 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 13 19:25:30.287548 containerd[2133]: time="2026-04-13T19:25:30.287458621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:30.290482 containerd[2133]: time="2026-04-13T19:25:30.290413549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 13 19:25:30.293217 containerd[2133]: time="2026-04-13T19:25:30.293159281Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:30.302236 containerd[2133]: time="2026-04-13T19:25:30.302041945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:30.306725 containerd[2133]: time="2026-04-13T19:25:30.306637141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.978529111s" Apr 13 19:25:30.307481 containerd[2133]: time="2026-04-13T19:25:30.306912769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 19:25:30.309919 containerd[2133]: time="2026-04-13T19:25:30.309045193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 19:25:30.318553 containerd[2133]: time="2026-04-13T19:25:30.318478465Z" level=info msg="CreateContainer within sandbox 
\"430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:25:30.351174 containerd[2133]: time="2026-04-13T19:25:30.351062089Z" level=info msg="CreateContainer within sandbox \"430ccb1c7aae9538a072e31245c5da029675b660aa845334d729a030ea96ae3f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f9a992278a07dae7fdcc0ff2a948216734b0a111f13a622cc3f975b79ad6a3a0\"" Apr 13 19:25:30.360242 containerd[2133]: time="2026-04-13T19:25:30.356852977Z" level=info msg="StartContainer for \"f9a992278a07dae7fdcc0ff2a948216734b0a111f13a622cc3f975b79ad6a3a0\"" Apr 13 19:25:30.501372 containerd[2133]: time="2026-04-13T19:25:30.501297182Z" level=info msg="StartContainer for \"f9a992278a07dae7fdcc0ff2a948216734b0a111f13a622cc3f975b79ad6a3a0\" returns successfully" Apr 13 19:25:30.546057 sshd[6448]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:30.555518 systemd[1]: sshd@7-172.31.26.195:22-4.175.71.9:34766.service: Deactivated successfully. Apr 13 19:25:30.565476 systemd-logind[2104]: Session 8 logged out. Waiting for processes to exit. Apr 13 19:25:30.566541 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 19:25:30.570596 systemd-logind[2104]: Removed session 8. 
Apr 13 19:25:30.618597 kubelet[3617]: I0413 19:25:30.618471 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5d9697dc4b-6kgd9" podStartSLOduration=30.31625724 podStartE2EDuration="43.618447722s" podCreationTimestamp="2026-04-13 19:24:47 +0000 UTC" firstStartedPulling="2026-04-13 19:25:17.006528887 +0000 UTC m=+54.652599897" lastFinishedPulling="2026-04-13 19:25:30.308719381 +0000 UTC m=+67.954790379" observedRunningTime="2026-04-13 19:25:30.615089066 +0000 UTC m=+68.261160040" watchObservedRunningTime="2026-04-13 19:25:30.618447722 +0000 UTC m=+68.264518696" Apr 13 19:25:30.734868 containerd[2133]: time="2026-04-13T19:25:30.734071227Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:30.738421 containerd[2133]: time="2026-04-13T19:25:30.738367239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 19:25:30.744647 containerd[2133]: time="2026-04-13T19:25:30.744555315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 435.448094ms" Apr 13 19:25:30.744900 containerd[2133]: time="2026-04-13T19:25:30.744868539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 19:25:30.747399 containerd[2133]: time="2026-04-13T19:25:30.747200775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 19:25:30.756174 containerd[2133]: time="2026-04-13T19:25:30.755951379Z" level=info 
msg="CreateContainer within sandbox \"65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:25:30.794520 containerd[2133]: time="2026-04-13T19:25:30.794448111Z" level=info msg="CreateContainer within sandbox \"65492c6cada604393153b9f509468b7447eed14bf705dddf75da7f99564f6369\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"32afd9ed8b4e6dc24634ddeb8be8368d99ef59c5b2fada2453494dc4daf70086\"" Apr 13 19:25:30.798819 containerd[2133]: time="2026-04-13T19:25:30.797752863Z" level=info msg="StartContainer for \"32afd9ed8b4e6dc24634ddeb8be8368d99ef59c5b2fada2453494dc4daf70086\"" Apr 13 19:25:30.993200 containerd[2133]: time="2026-04-13T19:25:30.993141844Z" level=info msg="StartContainer for \"32afd9ed8b4e6dc24634ddeb8be8368d99ef59c5b2fada2453494dc4daf70086\" returns successfully" Apr 13 19:25:31.609854 kubelet[3617]: I0413 19:25:31.606743 3617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:25:31.635178 kubelet[3617]: I0413 19:25:31.634210 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5d9697dc4b-j7tsc" podStartSLOduration=31.321169777 podStartE2EDuration="44.634183347s" podCreationTimestamp="2026-04-13 19:24:47 +0000 UTC" firstStartedPulling="2026-04-13 19:25:17.433028473 +0000 UTC m=+55.079099435" lastFinishedPulling="2026-04-13 19:25:30.746042043 +0000 UTC m=+68.392113005" observedRunningTime="2026-04-13 19:25:31.631929567 +0000 UTC m=+69.278000565" watchObservedRunningTime="2026-04-13 19:25:31.634183347 +0000 UTC m=+69.280254417" Apr 13 19:25:32.508261 containerd[2133]: time="2026-04-13T19:25:32.508076020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:32.514807 containerd[2133]: time="2026-04-13T19:25:32.513933964Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 13 19:25:32.516991 containerd[2133]: time="2026-04-13T19:25:32.516812908Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:32.528906 containerd[2133]: time="2026-04-13T19:25:32.528812584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:32.532980 containerd[2133]: time="2026-04-13T19:25:32.532883860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.785619281s" Apr 13 19:25:32.532980 containerd[2133]: time="2026-04-13T19:25:32.532959592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 13 19:25:32.540549 containerd[2133]: time="2026-04-13T19:25:32.537048124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 19:25:32.548951 containerd[2133]: time="2026-04-13T19:25:32.548880892Z" level=info msg="CreateContainer within sandbox \"f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 19:25:32.604248 containerd[2133]: time="2026-04-13T19:25:32.603054280Z" level=info msg="CreateContainer within sandbox \"f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container 
id \"cff3beb0ea7a20064eb4ec843ca3bebac7e74fc230268541ed062ccc340e8400\"" Apr 13 19:25:32.609032 containerd[2133]: time="2026-04-13T19:25:32.608201068Z" level=info msg="StartContainer for \"cff3beb0ea7a20064eb4ec843ca3bebac7e74fc230268541ed062ccc340e8400\"" Apr 13 19:25:32.632120 kubelet[3617]: I0413 19:25:32.631898 3617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:25:32.912572 containerd[2133]: time="2026-04-13T19:25:32.912490638Z" level=info msg="StartContainer for \"cff3beb0ea7a20064eb4ec843ca3bebac7e74fc230268541ed062ccc340e8400\" returns successfully" Apr 13 19:25:33.904044 update_engine[2114]: I20260413 19:25:33.902046 2114 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 13 19:25:33.904044 update_engine[2114]: I20260413 19:25:33.904001 2114 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 13 19:25:33.910480 update_engine[2114]: I20260413 19:25:33.904527 2114 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 13 19:25:33.921635 update_engine[2114]: I20260413 19:25:33.916493 2114 omaha_request_params.cc:62] Current group set to lts Apr 13 19:25:33.921635 update_engine[2114]: I20260413 19:25:33.919278 2114 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 13 19:25:33.921635 update_engine[2114]: I20260413 19:25:33.919361 2114 update_attempter.cc:643] Scheduling an action processor start. 
Apr 13 19:25:33.921635 update_engine[2114]: I20260413 19:25:33.919400 2114 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 19:25:33.921635 update_engine[2114]: I20260413 19:25:33.919485 2114 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 13 19:25:33.921635 update_engine[2114]: I20260413 19:25:33.919612 2114 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 19:25:33.921635 update_engine[2114]: I20260413 19:25:33.919632 2114 omaha_request_action.cc:272] Request: Apr 13 19:25:33.921635 update_engine[2114]: Apr 13 19:25:33.921635 update_engine[2114]: Apr 13 19:25:33.921635 update_engine[2114]: Apr 13 19:25:33.921635 update_engine[2114]: Apr 13 19:25:33.921635 update_engine[2114]: Apr 13 19:25:33.921635 update_engine[2114]: Apr 13 19:25:33.921635 update_engine[2114]: Apr 13 19:25:33.921635 update_engine[2114]: Apr 13 19:25:33.921635 update_engine[2114]: I20260413 19:25:33.919651 2114 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:25:33.928113 locksmithd[2196]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 13 19:25:33.941733 update_engine[2114]: I20260413 19:25:33.941641 2114 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:25:33.947945 update_engine[2114]: I20260413 19:25:33.947809 2114 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 19:25:33.974751 update_engine[2114]: E20260413 19:25:33.974620 2114 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:25:33.976325 update_engine[2114]: I20260413 19:25:33.976109 2114 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 13 19:25:34.687117 containerd[2133]: time="2026-04-13T19:25:34.687051739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:34.690280 containerd[2133]: time="2026-04-13T19:25:34.689872903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 13 19:25:34.693301 containerd[2133]: time="2026-04-13T19:25:34.692216107Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:34.699197 containerd[2133]: time="2026-04-13T19:25:34.698979883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:34.705985 containerd[2133]: time="2026-04-13T19:25:34.705727219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.167785431s" Apr 13 19:25:34.705985 containerd[2133]: time="2026-04-13T19:25:34.705818851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference 
\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 13 19:25:34.713915 containerd[2133]: time="2026-04-13T19:25:34.712814191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 19:25:34.723170 containerd[2133]: time="2026-04-13T19:25:34.723035467Z" level=info msg="CreateContainer within sandbox \"ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 19:25:34.768761 containerd[2133]: time="2026-04-13T19:25:34.768327199Z" level=info msg="CreateContainer within sandbox \"ae728655a2123bf9b5472b8aac0af9dfb238bac32b58c42e6e1cff6560d36f37\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"18fb3e7f51c67561463b9a6cd607705098dccf8fe026b4117d611d96523340e1\"" Apr 13 19:25:34.772264 containerd[2133]: time="2026-04-13T19:25:34.771852991Z" level=info msg="StartContainer for \"18fb3e7f51c67561463b9a6cd607705098dccf8fe026b4117d611d96523340e1\"" Apr 13 19:25:34.977476 containerd[2133]: time="2026-04-13T19:25:34.977299388Z" level=info msg="StartContainer for \"18fb3e7f51c67561463b9a6cd607705098dccf8fe026b4117d611d96523340e1\" returns successfully" Apr 13 19:25:35.692731 kubelet[3617]: I0413 19:25:35.690401 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4pjcq" podStartSLOduration=26.95401924 podStartE2EDuration="44.690374888s" podCreationTimestamp="2026-04-13 19:24:51 +0000 UTC" firstStartedPulling="2026-04-13 19:25:16.974073195 +0000 UTC m=+54.620144157" lastFinishedPulling="2026-04-13 19:25:34.710428831 +0000 UTC m=+72.356499805" observedRunningTime="2026-04-13 19:25:35.68896946 +0000 UTC m=+73.335040434" watchObservedRunningTime="2026-04-13 19:25:35.690374888 +0000 UTC m=+73.336445850" Apr 13 19:25:35.718176 systemd[1]: Started sshd@8-172.31.26.195:22-4.175.71.9:38704.service - OpenSSH per-connection server daemon 
(4.175.71.9:38704). Apr 13 19:25:35.849785 kubelet[3617]: I0413 19:25:35.849716 3617 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 19:25:35.849785 kubelet[3617]: I0413 19:25:35.849787 3617 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 19:25:36.534503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861122907.mount: Deactivated successfully. Apr 13 19:25:36.556093 containerd[2133]: time="2026-04-13T19:25:36.554467616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:36.556093 containerd[2133]: time="2026-04-13T19:25:36.556032056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 13 19:25:36.557054 containerd[2133]: time="2026-04-13T19:25:36.557007536Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:36.562314 containerd[2133]: time="2026-04-13T19:25:36.562241888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:36.563994 containerd[2133]: time="2026-04-13T19:25:36.563931980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.850136165s" Apr 13 19:25:36.564149 containerd[2133]: time="2026-04-13T19:25:36.563992916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 13 19:25:36.572499 containerd[2133]: time="2026-04-13T19:25:36.572433824Z" level=info msg="CreateContainer within sandbox \"f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 19:25:36.597326 containerd[2133]: time="2026-04-13T19:25:36.597267728Z" level=info msg="CreateContainer within sandbox \"f5bddf80767bcedd4f117f0161653fdce5485dfc63eb9e9d13eecc0e10b65e4c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"59a443710b2b02cb90519753d8c197202838c57aac473322eabbf05e94b017ce\"" Apr 13 19:25:36.599057 containerd[2133]: time="2026-04-13T19:25:36.599006012Z" level=info msg="StartContainer for \"59a443710b2b02cb90519753d8c197202838c57aac473322eabbf05e94b017ce\"" Apr 13 19:25:36.736881 containerd[2133]: time="2026-04-13T19:25:36.736804557Z" level=info msg="StartContainer for \"59a443710b2b02cb90519753d8c197202838c57aac473322eabbf05e94b017ce\" returns successfully" Apr 13 19:25:36.747495 sshd[6666]: Accepted publickey for core from 4.175.71.9 port 38704 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:36.754552 sshd[6666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:36.770155 systemd-logind[2104]: New session 9 of user core. Apr 13 19:25:36.775990 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 13 19:25:37.585112 sshd[6666]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:37.591855 systemd[1]: sshd@8-172.31.26.195:22-4.175.71.9:38704.service: Deactivated successfully. Apr 13 19:25:37.591954 systemd-logind[2104]: Session 9 logged out. Waiting for processes to exit. Apr 13 19:25:37.600242 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 19:25:37.602863 systemd-logind[2104]: Removed session 9. Apr 13 19:25:37.709780 kubelet[3617]: I0413 19:25:37.709648 3617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79f47f5d66-bmfqd" podStartSLOduration=4.74237506 podStartE2EDuration="23.709626754s" podCreationTimestamp="2026-04-13 19:25:14 +0000 UTC" firstStartedPulling="2026-04-13 19:25:17.599288078 +0000 UTC m=+55.245359040" lastFinishedPulling="2026-04-13 19:25:36.566539772 +0000 UTC m=+74.212610734" observedRunningTime="2026-04-13 19:25:37.706764526 +0000 UTC m=+75.352835776" watchObservedRunningTime="2026-04-13 19:25:37.709626754 +0000 UTC m=+75.355697728" Apr 13 19:25:39.450382 kubelet[3617]: I0413 19:25:39.449735 3617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:25:42.746179 systemd[1]: Started sshd@9-172.31.26.195:22-4.175.71.9:38710.service - OpenSSH per-connection server daemon (4.175.71.9:38710). Apr 13 19:25:43.741595 sshd[6742]: Accepted publickey for core from 4.175.71.9 port 38710 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:43.746124 sshd[6742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:43.755378 systemd-logind[2104]: New session 10 of user core. Apr 13 19:25:43.765321 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 13 19:25:43.898772 update_engine[2114]: I20260413 19:25:43.898660 2114 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:25:43.899404 update_engine[2114]: I20260413 19:25:43.899067 2114 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:25:43.899487 update_engine[2114]: I20260413 19:25:43.899398 2114 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 19:25:43.900589 update_engine[2114]: E20260413 19:25:43.900535 2114 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:25:43.900699 update_engine[2114]: I20260413 19:25:43.900626 2114 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 13 19:25:44.561060 sshd[6742]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:44.568545 systemd[1]: sshd@9-172.31.26.195:22-4.175.71.9:38710.service: Deactivated successfully. Apr 13 19:25:44.577220 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 19:25:44.577816 systemd-logind[2104]: Session 10 logged out. Waiting for processes to exit. Apr 13 19:25:44.582619 systemd-logind[2104]: Removed session 10. Apr 13 19:25:49.737627 systemd[1]: Started sshd@10-172.31.26.195:22-4.175.71.9:42654.service - OpenSSH per-connection server daemon (4.175.71.9:42654). Apr 13 19:25:50.781738 sshd[6796]: Accepted publickey for core from 4.175.71.9 port 42654 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:50.785402 sshd[6796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:50.794739 systemd-logind[2104]: New session 11 of user core. Apr 13 19:25:50.803391 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 19:25:51.833617 sshd[6796]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:51.845904 systemd[1]: sshd@10-172.31.26.195:22-4.175.71.9:42654.service: Deactivated successfully. Apr 13 19:25:51.856315 systemd-logind[2104]: Session 11 logged out. 
Waiting for processes to exit. Apr 13 19:25:51.857353 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 19:25:51.861812 systemd-logind[2104]: Removed session 11. Apr 13 19:25:52.008988 systemd[1]: Started sshd@11-172.31.26.195:22-4.175.71.9:42656.service - OpenSSH per-connection server daemon (4.175.71.9:42656). Apr 13 19:25:53.062745 sshd[6832]: Accepted publickey for core from 4.175.71.9 port 42656 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:53.065458 sshd[6832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:53.074325 systemd-logind[2104]: New session 12 of user core. Apr 13 19:25:53.079371 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 19:25:53.899001 update_engine[2114]: I20260413 19:25:53.898918 2114 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:25:53.900484 update_engine[2114]: I20260413 19:25:53.900104 2114 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:25:53.900484 update_engine[2114]: I20260413 19:25:53.900418 2114 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 19:25:53.902505 update_engine[2114]: E20260413 19:25:53.902333 2114 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:25:53.902505 update_engine[2114]: I20260413 19:25:53.902453 2114 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 13 19:25:54.110194 sshd[6832]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:54.116137 systemd[1]: sshd@11-172.31.26.195:22-4.175.71.9:42656.service: Deactivated successfully. Apr 13 19:25:54.130626 systemd-logind[2104]: Session 12 logged out. Waiting for processes to exit. Apr 13 19:25:54.131605 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 19:25:54.139962 systemd-logind[2104]: Removed session 12. 
Apr 13 19:25:54.270225 systemd[1]: Started sshd@12-172.31.26.195:22-4.175.71.9:42670.service - OpenSSH per-connection server daemon (4.175.71.9:42670). Apr 13 19:25:55.269754 sshd[6866]: Accepted publickey for core from 4.175.71.9 port 42670 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:55.272387 sshd[6866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:55.280972 systemd-logind[2104]: New session 13 of user core. Apr 13 19:25:55.285865 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 19:25:56.100045 sshd[6866]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:56.108200 systemd[1]: sshd@12-172.31.26.195:22-4.175.71.9:42670.service: Deactivated successfully. Apr 13 19:25:56.113940 systemd-logind[2104]: Session 13 logged out. Waiting for processes to exit. Apr 13 19:25:56.115821 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 19:25:56.118207 systemd-logind[2104]: Removed session 13. Apr 13 19:26:01.285347 systemd[1]: Started sshd@13-172.31.26.195:22-4.175.71.9:39000.service - OpenSSH per-connection server daemon (4.175.71.9:39000). Apr 13 19:26:01.859029 kubelet[3617]: I0413 19:26:01.858386 3617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:26:02.320536 sshd[6916]: Accepted publickey for core from 4.175.71.9 port 39000 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:02.323924 sshd[6916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:02.332578 systemd-logind[2104]: New session 14 of user core. Apr 13 19:26:02.339229 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 19:26:03.167964 sshd[6916]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:03.178245 systemd[1]: sshd@13-172.31.26.195:22-4.175.71.9:39000.service: Deactivated successfully. 
Apr 13 19:26:03.186974 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 19:26:03.188544 systemd-logind[2104]: Session 14 logged out. Waiting for processes to exit. Apr 13 19:26:03.191217 systemd-logind[2104]: Removed session 14. Apr 13 19:26:03.334436 systemd[1]: Started sshd@14-172.31.26.195:22-4.175.71.9:39016.service - OpenSSH per-connection server daemon (4.175.71.9:39016). Apr 13 19:26:03.899489 update_engine[2114]: I20260413 19:26:03.898698 2114 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:26:03.899489 update_engine[2114]: I20260413 19:26:03.899065 2114 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:26:03.899489 update_engine[2114]: I20260413 19:26:03.899402 2114 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 19:26:03.902782 update_engine[2114]: E20260413 19:26:03.900867 2114 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:26:03.902782 update_engine[2114]: I20260413 19:26:03.900978 2114 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 19:26:03.902782 update_engine[2114]: I20260413 19:26:03.901001 2114 omaha_request_action.cc:617] Omaha request response: Apr 13 19:26:03.902782 update_engine[2114]: E20260413 19:26:03.901118 2114 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 13 19:26:03.902782 update_engine[2114]: I20260413 19:26:03.901160 2114 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 13 19:26:03.902782 update_engine[2114]: I20260413 19:26:03.901177 2114 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 19:26:03.902782 update_engine[2114]: I20260413 19:26:03.901193 2114 update_attempter.cc:306] Processing Done. 
Apr 13 19:26:03.902782 update_engine[2114]: E20260413 19:26:03.901221 2114 update_attempter.cc:619] Update failed. Apr 13 19:26:03.902782 update_engine[2114]: I20260413 19:26:03.901239 2114 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 13 19:26:03.902782 update_engine[2114]: I20260413 19:26:03.901255 2114 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 13 19:26:03.902782 update_engine[2114]: I20260413 19:26:03.901273 2114 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 13 19:26:03.902782 update_engine[2114]: I20260413 19:26:03.901387 2114 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 19:26:03.902782 update_engine[2114]: I20260413 19:26:03.901432 2114 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 19:26:03.902782 update_engine[2114]: I20260413 19:26:03.901450 2114 omaha_request_action.cc:272] Request: Apr 13 19:26:03.902782 update_engine[2114]: Apr 13 19:26:03.902782 update_engine[2114]: Apr 13 19:26:03.904659 update_engine[2114]: Apr 13 19:26:03.904659 update_engine[2114]: Apr 13 19:26:03.904659 update_engine[2114]: Apr 13 19:26:03.904659 update_engine[2114]: Apr 13 19:26:03.904659 update_engine[2114]: I20260413 19:26:03.901466 2114 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:26:03.904659 update_engine[2114]: I20260413 19:26:03.901794 2114 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:26:03.904659 update_engine[2114]: I20260413 19:26:03.902130 2114 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 19:26:03.904659 update_engine[2114]: E20260413 19:26:03.902431 2114 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:26:03.904659 update_engine[2114]: I20260413 19:26:03.902496 2114 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 19:26:03.904659 update_engine[2114]: I20260413 19:26:03.902516 2114 omaha_request_action.cc:617] Omaha request response: Apr 13 19:26:03.904659 update_engine[2114]: I20260413 19:26:03.902534 2114 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 19:26:03.904659 update_engine[2114]: I20260413 19:26:03.902550 2114 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 19:26:03.904659 update_engine[2114]: I20260413 19:26:03.902565 2114 update_attempter.cc:306] Processing Done. Apr 13 19:26:03.904659 update_engine[2114]: I20260413 19:26:03.902581 2114 update_attempter.cc:310] Error event sent. Apr 13 19:26:03.904659 update_engine[2114]: I20260413 19:26:03.902601 2114 update_check_scheduler.cc:74] Next update check in 44m59s Apr 13 19:26:03.906179 locksmithd[2196]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 13 19:26:03.907756 locksmithd[2196]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 13 19:26:04.336524 sshd[6932]: Accepted publickey for core from 4.175.71.9 port 39016 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:04.339290 sshd[6932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:04.347529 systemd-logind[2104]: New session 15 of user core. Apr 13 19:26:04.355369 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 13 19:26:05.537043 sshd[6932]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:05.548764 systemd-logind[2104]: Session 15 logged out. Waiting for processes to exit. Apr 13 19:26:05.550069 systemd[1]: sshd@14-172.31.26.195:22-4.175.71.9:39016.service: Deactivated successfully. Apr 13 19:26:05.557369 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 19:26:05.560414 systemd-logind[2104]: Removed session 15. Apr 13 19:26:05.705167 systemd[1]: Started sshd@15-172.31.26.195:22-4.175.71.9:56146.service - OpenSSH per-connection server daemon (4.175.71.9:56146). Apr 13 19:26:06.724046 sshd[6944]: Accepted publickey for core from 4.175.71.9 port 56146 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:06.727752 sshd[6944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:06.736284 systemd-logind[2104]: New session 16 of user core. Apr 13 19:26:06.744229 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 19:26:08.528029 sshd[6944]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:08.536448 systemd[1]: sshd@15-172.31.26.195:22-4.175.71.9:56146.service: Deactivated successfully. Apr 13 19:26:08.544411 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 19:26:08.547977 systemd-logind[2104]: Session 16 logged out. Waiting for processes to exit. Apr 13 19:26:08.550437 systemd-logind[2104]: Removed session 16. Apr 13 19:26:08.696215 systemd[1]: Started sshd@16-172.31.26.195:22-4.175.71.9:56160.service - OpenSSH per-connection server daemon (4.175.71.9:56160). Apr 13 19:26:09.696280 sshd[6976]: Accepted publickey for core from 4.175.71.9 port 56160 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:09.700038 sshd[6976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:09.708570 systemd-logind[2104]: New session 17 of user core. 
Apr 13 19:26:09.714200 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 19:26:10.765254 sshd[6976]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:10.774811 systemd-logind[2104]: Session 17 logged out. Waiting for processes to exit. Apr 13 19:26:10.775583 systemd[1]: sshd@16-172.31.26.195:22-4.175.71.9:56160.service: Deactivated successfully. Apr 13 19:26:10.782401 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 19:26:10.783870 systemd-logind[2104]: Removed session 17. Apr 13 19:26:10.928186 systemd[1]: Started sshd@17-172.31.26.195:22-4.175.71.9:56176.service - OpenSSH per-connection server daemon (4.175.71.9:56176). Apr 13 19:26:11.898762 sshd[6988]: Accepted publickey for core from 4.175.71.9 port 56176 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:11.903923 sshd[6988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:11.913551 systemd-logind[2104]: New session 18 of user core. Apr 13 19:26:11.918235 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 19:26:12.675424 sshd[6988]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:12.683875 systemd[1]: sshd@17-172.31.26.195:22-4.175.71.9:56176.service: Deactivated successfully. Apr 13 19:26:12.690530 systemd-logind[2104]: Session 18 logged out. Waiting for processes to exit. Apr 13 19:26:12.691562 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 19:26:12.696604 systemd-logind[2104]: Removed session 18. Apr 13 19:26:17.855447 systemd[1]: Started sshd@18-172.31.26.195:22-4.175.71.9:45854.service - OpenSSH per-connection server daemon (4.175.71.9:45854). 
Apr 13 19:26:18.882118 sshd[7026]: Accepted publickey for core from 4.175.71.9 port 45854 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:18.884987 sshd[7026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:18.893679 systemd-logind[2104]: New session 19 of user core. Apr 13 19:26:18.902335 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 19:26:19.700800 sshd[7026]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:19.706925 systemd[1]: sshd@18-172.31.26.195:22-4.175.71.9:45854.service: Deactivated successfully. Apr 13 19:26:19.714786 systemd-logind[2104]: Session 19 logged out. Waiting for processes to exit. Apr 13 19:26:19.716078 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 19:26:19.718323 systemd-logind[2104]: Removed session 19. Apr 13 19:26:24.868200 systemd[1]: Started sshd@19-172.31.26.195:22-4.175.71.9:45868.service - OpenSSH per-connection server daemon (4.175.71.9:45868). Apr 13 19:26:25.875011 sshd[7064]: Accepted publickey for core from 4.175.71.9 port 45868 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:25.878014 sshd[7064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:25.885963 systemd-logind[2104]: New session 20 of user core. Apr 13 19:26:25.899195 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 19:26:26.679005 sshd[7064]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:26.685654 systemd[1]: sshd@19-172.31.26.195:22-4.175.71.9:45868.service: Deactivated successfully. Apr 13 19:26:26.694463 systemd-logind[2104]: Session 20 logged out. Waiting for processes to exit. Apr 13 19:26:26.695630 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 19:26:26.699584 systemd-logind[2104]: Removed session 20. 
Apr 13 19:26:31.844231 systemd[1]: Started sshd@20-172.31.26.195:22-4.175.71.9:56460.service - OpenSSH per-connection server daemon (4.175.71.9:56460). Apr 13 19:26:32.838140 sshd[7119]: Accepted publickey for core from 4.175.71.9 port 56460 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:32.842275 sshd[7119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:32.852822 systemd-logind[2104]: New session 21 of user core. Apr 13 19:26:32.859423 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 19:26:33.644257 sshd[7119]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:33.652089 systemd-logind[2104]: Session 21 logged out. Waiting for processes to exit. Apr 13 19:26:33.656330 systemd[1]: sshd@20-172.31.26.195:22-4.175.71.9:56460.service: Deactivated successfully. Apr 13 19:26:33.663872 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 19:26:33.666940 systemd-logind[2104]: Removed session 21. Apr 13 19:26:38.808273 systemd[1]: Started sshd@21-172.31.26.195:22-4.175.71.9:44130.service - OpenSSH per-connection server daemon (4.175.71.9:44130). Apr 13 19:26:39.775570 sshd[7135]: Accepted publickey for core from 4.175.71.9 port 44130 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:39.782743 sshd[7135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:39.803073 systemd-logind[2104]: New session 22 of user core. Apr 13 19:26:39.811283 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 13 19:26:40.581251 sshd[7135]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:40.591439 systemd-logind[2104]: Session 22 logged out. Waiting for processes to exit. Apr 13 19:26:40.592298 systemd[1]: sshd@21-172.31.26.195:22-4.175.71.9:44130.service: Deactivated successfully. Apr 13 19:26:40.599557 systemd[1]: session-22.scope: Deactivated successfully. 
Apr 13 19:26:40.602785 systemd-logind[2104]: Removed session 22. Apr 13 19:26:54.660015 kubelet[3617]: E0413 19:26:54.659638 3617 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-195?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 13 19:26:55.578773 containerd[2133]: time="2026-04-13T19:26:55.577083948Z" level=info msg="shim disconnected" id=d0a0249bc640c5c65672a203c9523d4b1308f153d193673efbbcccaa08b04210 namespace=k8s.io Apr 13 19:26:55.578773 containerd[2133]: time="2026-04-13T19:26:55.577168080Z" level=warning msg="cleaning up after shim disconnected" id=d0a0249bc640c5c65672a203c9523d4b1308f153d193673efbbcccaa08b04210 namespace=k8s.io Apr 13 19:26:55.578773 containerd[2133]: time="2026-04-13T19:26:55.577188984Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:55.587215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0a0249bc640c5c65672a203c9523d4b1308f153d193673efbbcccaa08b04210-rootfs.mount: Deactivated successfully. Apr 13 19:26:55.753181 containerd[2133]: time="2026-04-13T19:26:55.751323289Z" level=info msg="shim disconnected" id=496e8a912a9d936c683d528876e276f4fa7dd79613bbbddb996f089c59eb0a00 namespace=k8s.io Apr 13 19:26:55.753181 containerd[2133]: time="2026-04-13T19:26:55.751402897Z" level=warning msg="cleaning up after shim disconnected" id=496e8a912a9d936c683d528876e276f4fa7dd79613bbbddb996f089c59eb0a00 namespace=k8s.io Apr 13 19:26:55.753181 containerd[2133]: time="2026-04-13T19:26:55.751423369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:55.758494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-496e8a912a9d936c683d528876e276f4fa7dd79613bbbddb996f089c59eb0a00-rootfs.mount: Deactivated successfully. 
Apr 13 19:26:55.952041 kubelet[3617]: I0413 19:26:55.950857 3617 scope.go:117] "RemoveContainer" containerID="496e8a912a9d936c683d528876e276f4fa7dd79613bbbddb996f089c59eb0a00" Apr 13 19:26:55.959961 containerd[2133]: time="2026-04-13T19:26:55.959569142Z" level=info msg="CreateContainer within sandbox \"c4e217f373e7b751203f7150f13bb8ef0e01264c66d70a5f46a53e331661f049\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 13 19:26:55.960156 kubelet[3617]: I0413 19:26:55.959560 3617 scope.go:117] "RemoveContainer" containerID="d0a0249bc640c5c65672a203c9523d4b1308f153d193673efbbcccaa08b04210" Apr 13 19:26:55.974904 containerd[2133]: time="2026-04-13T19:26:55.974843486Z" level=info msg="CreateContainer within sandbox \"200af34ba52cb16ba002aba47997b960cc1bda22d40ee172674e8f3cccd5a62e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 13 19:26:55.997812 containerd[2133]: time="2026-04-13T19:26:55.997540251Z" level=info msg="CreateContainer within sandbox \"c4e217f373e7b751203f7150f13bb8ef0e01264c66d70a5f46a53e331661f049\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8d8c17f80f307cf58f849bbcbebca466ecf021b07fac17649e86a40d23ab8031\"" Apr 13 19:26:55.998723 containerd[2133]: time="2026-04-13T19:26:55.998623155Z" level=info msg="StartContainer for \"8d8c17f80f307cf58f849bbcbebca466ecf021b07fac17649e86a40d23ab8031\"" Apr 13 19:26:56.021244 containerd[2133]: time="2026-04-13T19:26:56.021069119Z" level=info msg="CreateContainer within sandbox \"200af34ba52cb16ba002aba47997b960cc1bda22d40ee172674e8f3cccd5a62e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0cfe6666da719c1bee87fe485d9b6bc37f26ae2d13f6d158d005b0350eea8fb7\"" Apr 13 19:26:56.022231 containerd[2133]: time="2026-04-13T19:26:56.022166219Z" level=info msg="StartContainer for \"0cfe6666da719c1bee87fe485d9b6bc37f26ae2d13f6d158d005b0350eea8fb7\"" Apr 13 19:26:56.158622 containerd[2133]: 
time="2026-04-13T19:26:56.158552363Z" level=info msg="StartContainer for \"8d8c17f80f307cf58f849bbcbebca466ecf021b07fac17649e86a40d23ab8031\" returns successfully" Apr 13 19:26:56.175444 containerd[2133]: time="2026-04-13T19:26:56.174359411Z" level=info msg="StartContainer for \"0cfe6666da719c1bee87fe485d9b6bc37f26ae2d13f6d158d005b0350eea8fb7\" returns successfully" Apr 13 19:27:01.401509 containerd[2133]: time="2026-04-13T19:27:01.401188397Z" level=info msg="shim disconnected" id=c161f79b196d662ef7e02f9e85ece475ebb327e1dff3fcaa1282e3b6f748edb2 namespace=k8s.io Apr 13 19:27:01.401509 containerd[2133]: time="2026-04-13T19:27:01.401268017Z" level=warning msg="cleaning up after shim disconnected" id=c161f79b196d662ef7e02f9e85ece475ebb327e1dff3fcaa1282e3b6f748edb2 namespace=k8s.io Apr 13 19:27:01.401509 containerd[2133]: time="2026-04-13T19:27:01.401292833Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:01.408297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c161f79b196d662ef7e02f9e85ece475ebb327e1dff3fcaa1282e3b6f748edb2-rootfs.mount: Deactivated successfully. 
Apr 13 19:27:01.988178 kubelet[3617]: I0413 19:27:01.988118 3617 scope.go:117] "RemoveContainer" containerID="c161f79b196d662ef7e02f9e85ece475ebb327e1dff3fcaa1282e3b6f748edb2" Apr 13 19:27:01.991943 containerd[2133]: time="2026-04-13T19:27:01.991885976Z" level=info msg="CreateContainer within sandbox \"66ff032a0dc0fe740fb8882e18f4c81d114a0662027e19025776e48fc3a160d1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 13 19:27:02.024380 containerd[2133]: time="2026-04-13T19:27:02.024206152Z" level=info msg="CreateContainer within sandbox \"66ff032a0dc0fe740fb8882e18f4c81d114a0662027e19025776e48fc3a160d1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d13699d13ab24e7c3ac0ecc0501dfea3fd9fa7e71ba1066cabe1e42073078e0d\"" Apr 13 19:27:02.025063 containerd[2133]: time="2026-04-13T19:27:02.025004512Z" level=info msg="StartContainer for \"d13699d13ab24e7c3ac0ecc0501dfea3fd9fa7e71ba1066cabe1e42073078e0d\"" Apr 13 19:27:02.028352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372749578.mount: Deactivated successfully. Apr 13 19:27:02.155891 containerd[2133]: time="2026-04-13T19:27:02.155819021Z" level=info msg="StartContainer for \"d13699d13ab24e7c3ac0ecc0501dfea3fd9fa7e71ba1066cabe1e42073078e0d\" returns successfully" Apr 13 19:27:04.661212 kubelet[3617]: E0413 19:27:04.661138 3617 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-195?timeout=10s\": context deadline exceeded" Apr 13 19:27:07.778382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cfe6666da719c1bee87fe485d9b6bc37f26ae2d13f6d158d005b0350eea8fb7-rootfs.mount: Deactivated successfully. 
Apr 13 19:27:07.791629 containerd[2133]: time="2026-04-13T19:27:07.791534425Z" level=info msg="shim disconnected" id=0cfe6666da719c1bee87fe485d9b6bc37f26ae2d13f6d158d005b0350eea8fb7 namespace=k8s.io Apr 13 19:27:07.792991 containerd[2133]: time="2026-04-13T19:27:07.792478549Z" level=warning msg="cleaning up after shim disconnected" id=0cfe6666da719c1bee87fe485d9b6bc37f26ae2d13f6d158d005b0350eea8fb7 namespace=k8s.io Apr 13 19:27:07.792991 containerd[2133]: time="2026-04-13T19:27:07.792516337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:08.013878 kubelet[3617]: I0413 19:27:08.013668 3617 scope.go:117] "RemoveContainer" containerID="d0a0249bc640c5c65672a203c9523d4b1308f153d193673efbbcccaa08b04210" Apr 13 19:27:08.015430 kubelet[3617]: I0413 19:27:08.015106 3617 scope.go:117] "RemoveContainer" containerID="0cfe6666da719c1bee87fe485d9b6bc37f26ae2d13f6d158d005b0350eea8fb7" Apr 13 19:27:08.015430 kubelet[3617]: E0413 19:27:08.015370 3617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-54v78_tigera-operator(d7096666-0f43-44ac-b9df-3de4b481f6d7)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-54v78" podUID="d7096666-0f43-44ac-b9df-3de4b481f6d7" Apr 13 19:27:08.017204 containerd[2133]: time="2026-04-13T19:27:08.017137906Z" level=info msg="RemoveContainer for \"d0a0249bc640c5c65672a203c9523d4b1308f153d193673efbbcccaa08b04210\"" Apr 13 19:27:08.026756 containerd[2133]: time="2026-04-13T19:27:08.026649550Z" level=info msg="RemoveContainer for \"d0a0249bc640c5c65672a203c9523d4b1308f153d193673efbbcccaa08b04210\" returns successfully"