Apr 17 23:34:01.271365 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 17 23:34:01.271412 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 17 22:13:49 -00 2026
Apr 17 23:34:01.271438 kernel: KASLR disabled due to lack of seed
Apr 17 23:34:01.271455 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:34:01.271471 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Apr 17 23:34:01.271487 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:34:01.271506 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 17 23:34:01.271522 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 17 23:34:01.271539 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 17 23:34:01.271556 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 17 23:34:01.271577 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 17 23:34:01.271594 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 17 23:34:01.271610 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 17 23:34:01.271626 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 17 23:34:01.271645 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 17 23:34:01.271666 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 17 23:34:01.271684 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 17 23:34:01.271701 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 17 23:34:01.271717 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 17 23:34:01.271734 kernel: printk: bootconsole [uart0] enabled
Apr 17 23:34:01.271751 kernel: NUMA: Failed to initialise from firmware
Apr 17 23:34:01.271768 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 17 23:34:01.271785 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 17 23:34:01.271802 kernel: Zone ranges:
Apr 17 23:34:01.271819 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 17 23:34:01.271836 kernel: DMA32 empty
Apr 17 23:34:01.271856 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 17 23:34:01.271874 kernel: Movable zone start for each node
Apr 17 23:34:01.271890 kernel: Early memory node ranges
Apr 17 23:34:01.271907 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 17 23:34:01.271923 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 17 23:34:01.271940 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 17 23:34:01.271957 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 17 23:34:01.271973 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 17 23:34:01.271990 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 17 23:34:01.272008 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 17 23:34:01.272024 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 17 23:34:01.272041 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 17 23:34:01.272062 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 17 23:34:01.272080 kernel: psci: probing for conduit method from ACPI.
Apr 17 23:34:01.272103 kernel: psci: PSCIv1.0 detected in firmware.
Apr 17 23:34:01.272617 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 17 23:34:01.272643 kernel: psci: Trusted OS migration not required
Apr 17 23:34:01.272669 kernel: psci: SMC Calling Convention v1.1
Apr 17 23:34:01.272689 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 17 23:34:01.272707 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 17 23:34:01.272726 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 17 23:34:01.272744 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 17 23:34:01.272762 kernel: Detected PIPT I-cache on CPU0
Apr 17 23:34:01.272780 kernel: CPU features: detected: GIC system register CPU interface
Apr 17 23:34:01.272799 kernel: CPU features: detected: Spectre-v2
Apr 17 23:34:01.272817 kernel: CPU features: detected: Spectre-v3a
Apr 17 23:34:01.272835 kernel: CPU features: detected: Spectre-BHB
Apr 17 23:34:01.272853 kernel: CPU features: detected: ARM erratum 1742098
Apr 17 23:34:01.272876 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 17 23:34:01.272894 kernel: alternatives: applying boot alternatives
Apr 17 23:34:01.272915 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349
Apr 17 23:34:01.272934 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:34:01.272952 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:34:01.272971 kernel: Fallback order for Node 0: 0
Apr 17 23:34:01.272989 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 17 23:34:01.273007 kernel: Policy zone: Normal
Apr 17 23:34:01.273026 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:34:01.273044 kernel: software IO TLB: area num 2.
Apr 17 23:34:01.273088 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 17 23:34:01.273144 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Apr 17 23:34:01.273167 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:34:01.273185 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:34:01.273204 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:34:01.273224 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:34:01.273242 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:34:01.273260 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:34:01.273278 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:34:01.273296 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:34:01.273314 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 17 23:34:01.273332 kernel: GICv3: 96 SPIs implemented
Apr 17 23:34:01.273365 kernel: GICv3: 0 Extended SPIs implemented
Apr 17 23:34:01.273384 kernel: Root IRQ handler: gic_handle_irq
Apr 17 23:34:01.273402 kernel: GICv3: GICv3 features: 16 PPIs
Apr 17 23:34:01.273420 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 17 23:34:01.273437 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 17 23:34:01.273455 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 17 23:34:01.273474 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 17 23:34:01.273491 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 17 23:34:01.273509 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 17 23:34:01.273527 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 17 23:34:01.273545 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:34:01.273562 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 17 23:34:01.273586 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 17 23:34:01.273604 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 17 23:34:01.273622 kernel: Console: colour dummy device 80x25
Apr 17 23:34:01.273640 kernel: printk: console [tty1] enabled
Apr 17 23:34:01.273659 kernel: ACPI: Core revision 20230628
Apr 17 23:34:01.273677 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 17 23:34:01.273695 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:34:01.273714 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:34:01.273732 kernel: landlock: Up and running.
Apr 17 23:34:01.273753 kernel: SELinux: Initializing.
Apr 17 23:34:01.273772 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:34:01.273790 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:34:01.273809 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:34:01.273828 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:34:01.273847 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:34:01.273865 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:34:01.273883 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 17 23:34:01.273901 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 17 23:34:01.273924 kernel: Remapping and enabling EFI services.
Apr 17 23:34:01.273942 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:34:01.273960 kernel: Detected PIPT I-cache on CPU1
Apr 17 23:34:01.273978 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 17 23:34:01.273997 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 17 23:34:01.274015 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 17 23:34:01.274033 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:34:01.274051 kernel: SMP: Total of 2 processors activated.
Apr 17 23:34:01.274068 kernel: CPU features: detected: 32-bit EL0 Support
Apr 17 23:34:01.274090 kernel: CPU features: detected: 32-bit EL1 Support
Apr 17 23:34:01.274109 kernel: CPU features: detected: CRC32 instructions
Apr 17 23:34:01.276495 kernel: CPU: All CPU(s) started at EL1
Apr 17 23:34:01.276530 kernel: alternatives: applying system-wide alternatives
Apr 17 23:34:01.276553 kernel: devtmpfs: initialized
Apr 17 23:34:01.276573 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:34:01.276591 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:34:01.276610 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:34:01.276629 kernel: SMBIOS 3.0.0 present.
Apr 17 23:34:01.276652 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 17 23:34:01.276671 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:34:01.276690 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 17 23:34:01.276709 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 17 23:34:01.276728 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 17 23:34:01.276747 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:34:01.276765 kernel: audit: type=2000 audit(0.294:1): state=initialized audit_enabled=0 res=1
Apr 17 23:34:01.276784 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:34:01.276807 kernel: cpuidle: using governor menu
Apr 17 23:34:01.276827 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 17 23:34:01.276845 kernel: ASID allocator initialised with 65536 entries
Apr 17 23:34:01.276864 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:34:01.276883 kernel: Serial: AMBA PL011 UART driver
Apr 17 23:34:01.276901 kernel: Modules: 17488 pages in range for non-PLT usage
Apr 17 23:34:01.276920 kernel: Modules: 509008 pages in range for PLT usage
Apr 17 23:34:01.276939 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:34:01.276957 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:34:01.276980 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 17 23:34:01.276999 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 17 23:34:01.277018 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:34:01.277037 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:34:01.277075 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 17 23:34:01.277096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 17 23:34:01.277142 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:34:01.277165 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:34:01.277184 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:34:01.277209 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:34:01.277228 kernel: ACPI: Interpreter enabled
Apr 17 23:34:01.277246 kernel: ACPI: Using GIC for interrupt routing
Apr 17 23:34:01.277265 kernel: ACPI: MCFG table detected, 1 entries
Apr 17 23:34:01.277283 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 17 23:34:01.277604 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:34:01.277821 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 17 23:34:01.278030 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 17 23:34:01.279996 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 17 23:34:01.280240 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 17 23:34:01.280268 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 17 23:34:01.280287 kernel: acpiphp: Slot [1] registered
Apr 17 23:34:01.280306 kernel: acpiphp: Slot [2] registered
Apr 17 23:34:01.280325 kernel: acpiphp: Slot [3] registered
Apr 17 23:34:01.280344 kernel: acpiphp: Slot [4] registered
Apr 17 23:34:01.280362 kernel: acpiphp: Slot [5] registered
Apr 17 23:34:01.280389 kernel: acpiphp: Slot [6] registered
Apr 17 23:34:01.280409 kernel: acpiphp: Slot [7] registered
Apr 17 23:34:01.280427 kernel: acpiphp: Slot [8] registered
Apr 17 23:34:01.280446 kernel: acpiphp: Slot [9] registered
Apr 17 23:34:01.280464 kernel: acpiphp: Slot [10] registered
Apr 17 23:34:01.280483 kernel: acpiphp: Slot [11] registered
Apr 17 23:34:01.280501 kernel: acpiphp: Slot [12] registered
Apr 17 23:34:01.280520 kernel: acpiphp: Slot [13] registered
Apr 17 23:34:01.280538 kernel: acpiphp: Slot [14] registered
Apr 17 23:34:01.280557 kernel: acpiphp: Slot [15] registered
Apr 17 23:34:01.280580 kernel: acpiphp: Slot [16] registered
Apr 17 23:34:01.280598 kernel: acpiphp: Slot [17] registered
Apr 17 23:34:01.280617 kernel: acpiphp: Slot [18] registered
Apr 17 23:34:01.280635 kernel: acpiphp: Slot [19] registered
Apr 17 23:34:01.280653 kernel: acpiphp: Slot [20] registered
Apr 17 23:34:01.280672 kernel: acpiphp: Slot [21] registered
Apr 17 23:34:01.280690 kernel: acpiphp: Slot [22] registered
Apr 17 23:34:01.280709 kernel: acpiphp: Slot [23] registered
Apr 17 23:34:01.280728 kernel: acpiphp: Slot [24] registered
Apr 17 23:34:01.280750 kernel: acpiphp: Slot [25] registered
Apr 17 23:34:01.280769 kernel: acpiphp: Slot [26] registered
Apr 17 23:34:01.280788 kernel: acpiphp: Slot [27] registered
Apr 17 23:34:01.280806 kernel: acpiphp: Slot [28] registered
Apr 17 23:34:01.280825 kernel: acpiphp: Slot [29] registered
Apr 17 23:34:01.280843 kernel: acpiphp: Slot [30] registered
Apr 17 23:34:01.280862 kernel: acpiphp: Slot [31] registered
Apr 17 23:34:01.280880 kernel: PCI host bridge to bus 0000:00
Apr 17 23:34:01.282918 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 17 23:34:01.283226 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 17 23:34:01.283422 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 17 23:34:01.283611 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 17 23:34:01.283848 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 17 23:34:01.284081 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 17 23:34:01.287786 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 17 23:34:01.288057 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 17 23:34:01.288429 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 17 23:34:01.288660 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 17 23:34:01.288942 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 17 23:34:01.289307 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 17 23:34:01.289662 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 17 23:34:01.289891 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 17 23:34:01.291208 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 17 23:34:01.291452 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 17 23:34:01.291649 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 17 23:34:01.291846 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 17 23:34:01.291872 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 17 23:34:01.291892 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 17 23:34:01.291912 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 17 23:34:01.291931 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 17 23:34:01.291960 kernel: iommu: Default domain type: Translated
Apr 17 23:34:01.291979 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 17 23:34:01.291998 kernel: efivars: Registered efivars operations
Apr 17 23:34:01.292017 kernel: vgaarb: loaded
Apr 17 23:34:01.292036 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 17 23:34:01.292055 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:34:01.292074 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:34:01.292093 kernel: pnp: PnP ACPI init
Apr 17 23:34:01.292348 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 17 23:34:01.292385 kernel: pnp: PnP ACPI: found 1 devices
Apr 17 23:34:01.292405 kernel: NET: Registered PF_INET protocol family
Apr 17 23:34:01.292424 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:34:01.292443 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:34:01.292463 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:34:01.292482 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:34:01.292502 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:34:01.292521 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:34:01.292545 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:34:01.292565 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:34:01.292583 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:34:01.292603 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:34:01.292622 kernel: kvm [1]: HYP mode not available
Apr 17 23:34:01.292640 kernel: Initialise system trusted keyrings
Apr 17 23:34:01.292659 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:34:01.292677 kernel: Key type asymmetric registered
Apr 17 23:34:01.292696 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:34:01.292719 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 17 23:34:01.292739 kernel: io scheduler mq-deadline registered
Apr 17 23:34:01.292758 kernel: io scheduler kyber registered
Apr 17 23:34:01.292777 kernel: io scheduler bfq registered
Apr 17 23:34:01.293013 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 17 23:34:01.293043 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 17 23:34:01.293085 kernel: ACPI: button: Power Button [PWRB]
Apr 17 23:34:01.293105 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 17 23:34:01.294897 kernel: ACPI: button: Sleep Button [SLPB]
Apr 17 23:34:01.294933 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:34:01.294955 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 17 23:34:01.295298 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 17 23:34:01.295330 kernel: printk: console [ttyS0] disabled
Apr 17 23:34:01.295350 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 17 23:34:01.295369 kernel: printk: console [ttyS0] enabled
Apr 17 23:34:01.295388 kernel: printk: bootconsole [uart0] disabled
Apr 17 23:34:01.295406 kernel: thunder_xcv, ver 1.0
Apr 17 23:34:01.295425 kernel: thunder_bgx, ver 1.0
Apr 17 23:34:01.295452 kernel: nicpf, ver 1.0
Apr 17 23:34:01.295471 kernel: nicvf, ver 1.0
Apr 17 23:34:01.295709 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 17 23:34:01.295922 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-17T23:34:00 UTC (1776468840)
Apr 17 23:34:01.295949 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 17 23:34:01.295969 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 17 23:34:01.295988 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 17 23:34:01.296007 kernel: watchdog: Hard watchdog permanently disabled
Apr 17 23:34:01.296033 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:34:01.296051 kernel: Segment Routing with IPv6
Apr 17 23:34:01.296070 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:34:01.296089 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:34:01.296107 kernel: Key type dns_resolver registered
Apr 17 23:34:01.296153 kernel: registered taskstats version 1
Apr 17 23:34:01.296173 kernel: Loading compiled-in X.509 certificates
Apr 17 23:34:01.296193 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 1161289bfc8d953baa9f687fefeecf0e077bc535'
Apr 17 23:34:01.296241 kernel: Key type .fscrypt registered
Apr 17 23:34:01.296271 kernel: Key type fscrypt-provisioning registered
Apr 17 23:34:01.296291 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:34:01.296311 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:34:01.296332 kernel: ima: No architecture policies found
Apr 17 23:34:01.296351 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 17 23:34:01.296370 kernel: clk: Disabling unused clocks
Apr 17 23:34:01.296389 kernel: Freeing unused kernel memory: 39424K
Apr 17 23:34:01.296416 kernel: Run /init as init process
Apr 17 23:34:01.296437 kernel: with arguments:
Apr 17 23:34:01.296462 kernel: /init
Apr 17 23:34:01.296482 kernel: with environment:
Apr 17 23:34:01.296501 kernel: HOME=/
Apr 17 23:34:01.296520 kernel: TERM=linux
Apr 17 23:34:01.296544 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:34:01.296569 systemd[1]: Detected virtualization amazon.
Apr 17 23:34:01.296590 systemd[1]: Detected architecture arm64.
Apr 17 23:34:01.296612 systemd[1]: Running in initrd.
Apr 17 23:34:01.296638 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:34:01.296659 systemd[1]: Hostname set to .
Apr 17 23:34:01.296682 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:34:01.296704 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:34:01.296725 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:34:01.296746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:34:01.296769 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:34:01.296791 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:34:01.296822 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:34:01.296845 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:34:01.296871 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:34:01.296895 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:34:01.296917 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:34:01.296938 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:34:01.296965 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:34:01.296986 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:34:01.297007 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:34:01.297027 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:34:01.297065 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:34:01.297097 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:34:01.298996 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:34:01.299773 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:34:01.299808 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:34:01.299844 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:34:01.299867 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:34:01.299888 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:34:01.299909 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:34:01.299930 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:34:01.299951 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:34:01.299972 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:34:01.299993 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:34:01.300014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:34:01.300039 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:34:01.300060 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:34:01.300081 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:34:01.300103 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:34:01.301370 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:34:01.301461 systemd-journald[251]: Collecting audit messages is disabled.
Apr 17 23:34:01.301510 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:34:01.301532 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:34:01.301560 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:34:01.301582 systemd-journald[251]: Journal started
Apr 17 23:34:01.301619 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2e7695bbf06c780423e2bb5434f3eb) is 8.0M, max 75.3M, 67.3M free.
Apr 17 23:34:01.257184 systemd-modules-load[252]: Inserted module 'overlay'
Apr 17 23:34:01.320942 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:34:01.327802 kernel: Bridge firewalling registered
Apr 17 23:34:01.327802 systemd-modules-load[252]: Inserted module 'br_netfilter'
Apr 17 23:34:01.338808 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:34:01.340404 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:34:01.350201 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:34:01.352700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:34:01.358977 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:34:01.403397 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:34:01.405164 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:34:01.419812 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:34:01.440663 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:34:01.444056 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:34:01.455400 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:34:01.504772 dracut-cmdline[291]: dracut-dracut-053
Apr 17 23:34:01.516456 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349
Apr 17 23:34:01.533724 systemd-resolved[286]: Positive Trust Anchors:
Apr 17 23:34:01.533762 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:34:01.533825 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:34:01.669147 kernel: SCSI subsystem initialized
Apr 17 23:34:01.675150 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:34:01.688143 kernel: iscsi: registered transport (tcp)
Apr 17 23:34:01.711519 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:34:01.711593 kernel: QLogic iSCSI HBA Driver
Apr 17 23:34:01.788140 kernel: random: crng init done
Apr 17 23:34:01.788469 systemd-resolved[286]: Defaulting to hostname 'linux'.
Apr 17 23:34:01.792540 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:34:01.795533 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:34:01.826235 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:34:01.838565 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:34:01.873976 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:34:01.874054 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:34:01.874083 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:34:01.943190 kernel: raid6: neonx8 gen() 6725 MB/s
Apr 17 23:34:01.960165 kernel: raid6: neonx4 gen() 6559 MB/s
Apr 17 23:34:01.977158 kernel: raid6: neonx2 gen() 5471 MB/s
Apr 17 23:34:01.994155 kernel: raid6: neonx1 gen() 3958 MB/s
Apr 17 23:34:02.011160 kernel: raid6: int64x8 gen() 3802 MB/s
Apr 17 23:34:02.028164 kernel: raid6: int64x4 gen() 3716 MB/s
Apr 17 23:34:02.045161 kernel: raid6: int64x2 gen() 3594 MB/s
Apr 17 23:34:02.063262 kernel: raid6: int64x1 gen() 2772 MB/s
Apr 17 23:34:02.063339 kernel: raid6: using algorithm neonx8 gen() 6725 MB/s
Apr 17 23:34:02.082255 kernel: raid6: .... xor() 4837 MB/s, rmw enabled
Apr 17 23:34:02.082323 kernel: raid6: using neon recovery algorithm
Apr 17 23:34:02.091761 kernel: xor: measuring software checksum speed
Apr 17 23:34:02.091837 kernel: 8regs : 11023 MB/sec
Apr 17 23:34:02.093028 kernel: 32regs : 11948 MB/sec
Apr 17 23:34:02.095452 kernel: arm64_neon : 8626 MB/sec
Apr 17 23:34:02.095495 kernel: xor: using function: 32regs (11948 MB/sec)
Apr 17 23:34:02.181163 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:34:02.202019 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:34:02.217461 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:34:02.250042 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Apr 17 23:34:02.260062 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:34:02.271372 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:34:02.312682 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
Apr 17 23:34:02.374090 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:34:02.387429 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:34:02.515745 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:34:02.534961 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:34:02.587532 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:34:02.597781 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:34:02.602269 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:34:02.610713 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:34:02.630515 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:34:02.682504 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:34:02.752290 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 17 23:34:02.752372 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 17 23:34:02.758493 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 17 23:34:02.759303 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 17 23:34:02.771170 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:b2:71:96:b6:f1
Apr 17 23:34:02.771354 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:34:02.771600 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:34:02.776881 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:34:02.782483 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:34:02.789332 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:34:02.794692 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:34:02.803784 (udev-worker)[536]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:34:02.810629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:34:02.850879 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 17 23:34:02.850960 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 17 23:34:02.855735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:34:02.869177 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 17 23:34:02.869849 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:34:02.880851 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 23:34:02.880933 kernel: GPT:9289727 != 33554431
Apr 17 23:34:02.880962 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 23:34:02.883160 kernel: GPT:9289727 != 33554431
Apr 17 23:34:02.883244 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 23:34:02.883272 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:34:02.918707 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:34:03.018159 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (520)
Apr 17 23:34:03.033181 kernel: BTRFS: device fsid 6218981f-ef91-4196-be05-d5f6a224b350 devid 1 transid 32 /dev/nvme0n1p3 scanned by (udev-worker) (526)
Apr 17 23:34:03.043100 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 17 23:34:03.101025 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 17 23:34:03.155747 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 17 23:34:03.161822 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 17 23:34:03.189091 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 17 23:34:03.207485 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:34:03.224468 disk-uuid[663]: Primary Header is updated.
Apr 17 23:34:03.224468 disk-uuid[663]: Secondary Entries is updated.
Apr 17 23:34:03.224468 disk-uuid[663]: Secondary Header is updated.
Apr 17 23:34:03.236253 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:34:03.248149 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:34:03.259039 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:34:03.261153 kernel: block device autoloading is deprecated and will be removed.
Apr 17 23:34:04.257589 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:34:04.263760 disk-uuid[664]: The operation has completed successfully.
Apr 17 23:34:04.475734 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:34:04.475973 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:34:04.536469 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:34:04.547078 sh[1011]: Success
Apr 17 23:34:04.576199 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 17 23:34:04.697394 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:34:04.713337 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:34:04.725579 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:34:04.748566 kernel: BTRFS info (device dm-0): first mount of filesystem 6218981f-ef91-4196-be05-d5f6a224b350
Apr 17 23:34:04.748649 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:34:04.750928 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:34:04.752750 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:34:04.754097 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:34:04.892239 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 17 23:34:04.917525 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:34:04.922448 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:34:04.934506 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:34:04.943464 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:34:04.992891 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:34:04.992964 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:34:04.994388 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:34:05.012163 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:34:05.032327 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:34:05.038486 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:34:05.050047 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:34:05.072550 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:34:05.160212 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:34:05.184572 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:34:05.247488 systemd-networkd[1203]: lo: Link UP
Apr 17 23:34:05.247514 systemd-networkd[1203]: lo: Gained carrier
Apr 17 23:34:05.250602 systemd-networkd[1203]: Enumeration completed
Apr 17 23:34:05.251436 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:34:05.251908 systemd-networkd[1203]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:34:05.251916 systemd-networkd[1203]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:34:05.259451 systemd-networkd[1203]: eth0: Link UP
Apr 17 23:34:05.259461 systemd-networkd[1203]: eth0: Gained carrier
Apr 17 23:34:05.259487 systemd-networkd[1203]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:34:05.261160 systemd[1]: Reached target network.target - Network.
Apr 17 23:34:05.303278 systemd-networkd[1203]: eth0: DHCPv4 address 172.31.27.239/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 17 23:34:05.637371 ignition[1144]: Ignition 2.19.0
Apr 17 23:34:05.637401 ignition[1144]: Stage: fetch-offline
Apr 17 23:34:05.641756 ignition[1144]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:34:05.641806 ignition[1144]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:34:05.643983 ignition[1144]: Ignition finished successfully
Apr 17 23:34:05.650694 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:34:05.663476 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 23:34:05.709309 ignition[1213]: Ignition 2.19.0
Apr 17 23:34:05.709330 ignition[1213]: Stage: fetch
Apr 17 23:34:05.710027 ignition[1213]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:34:05.710058 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:34:05.710900 ignition[1213]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:34:05.729318 ignition[1213]: PUT result: OK
Apr 17 23:34:05.733506 ignition[1213]: parsed url from cmdline: ""
Apr 17 23:34:05.733676 ignition[1213]: no config URL provided
Apr 17 23:34:05.733915 ignition[1213]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:34:05.733948 ignition[1213]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:34:05.733994 ignition[1213]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:34:05.740912 ignition[1213]: PUT result: OK
Apr 17 23:34:05.743434 ignition[1213]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 17 23:34:05.749790 ignition[1213]: GET result: OK
Apr 17 23:34:05.751490 ignition[1213]: parsing config with SHA512: 248640d48046755915d2b77cb004080271f40452d0123fdf6e29dbaedc28b48c01aad3ad21c8b4ff43e309c9479d3c3129917d8cdf3e5e65330595e74db7e437
Apr 17 23:34:05.763514 unknown[1213]: fetched base config from "system"
Apr 17 23:34:05.764400 ignition[1213]: fetch: fetch complete
Apr 17 23:34:05.763543 unknown[1213]: fetched base config from "system"
Apr 17 23:34:05.764415 ignition[1213]: fetch: fetch passed
Apr 17 23:34:05.763558 unknown[1213]: fetched user config from "aws"
Apr 17 23:34:05.764541 ignition[1213]: Ignition finished successfully
Apr 17 23:34:05.776582 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:34:05.790490 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:34:05.834199 ignition[1219]: Ignition 2.19.0
Apr 17 23:34:05.834231 ignition[1219]: Stage: kargs
Apr 17 23:34:05.836289 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:34:05.836319 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:34:05.842439 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:34:05.845852 ignition[1219]: PUT result: OK
Apr 17 23:34:05.851358 ignition[1219]: kargs: kargs passed
Apr 17 23:34:05.851480 ignition[1219]: Ignition finished successfully
Apr 17 23:34:05.857252 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:34:05.872387 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:34:05.899050 ignition[1225]: Ignition 2.19.0
Apr 17 23:34:05.899082 ignition[1225]: Stage: disks
Apr 17 23:34:05.900950 ignition[1225]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:34:05.900978 ignition[1225]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:34:05.902294 ignition[1225]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:34:05.907195 ignition[1225]: PUT result: OK
Apr 17 23:34:05.914390 ignition[1225]: disks: disks passed
Apr 17 23:34:05.914543 ignition[1225]: Ignition finished successfully
Apr 17 23:34:05.920704 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:34:05.925738 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:34:05.929300 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:34:05.932962 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:34:05.935447 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:34:05.938491 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:34:05.957516 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:34:06.000597 systemd-fsck[1233]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 17 23:34:06.007451 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:34:06.019405 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:34:06.124442 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2a4b2d55-130a-4cda-bef1-b1e6ed7bcf6b r/w with ordered data mode. Quota mode: none.
Apr 17 23:34:06.125926 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:34:06.129807 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:34:06.145320 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:34:06.151404 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:34:06.157465 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 23:34:06.158759 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:34:06.158821 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:34:06.196674 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:34:06.216148 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1252)
Apr 17 23:34:06.215818 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:34:06.232660 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:34:06.232706 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:34:06.232733 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:34:06.245176 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:34:06.247770 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:34:06.651961 initrd-setup-root[1276]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:34:06.684188 initrd-setup-root[1283]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:34:06.707027 initrd-setup-root[1290]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:34:06.717331 initrd-setup-root[1297]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:34:06.757504 systemd-networkd[1203]: eth0: Gained IPv6LL
Apr 17 23:34:07.102735 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:34:07.114592 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:34:07.122816 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:34:07.142765 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:34:07.146654 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:34:07.195216 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:34:07.203729 ignition[1364]: INFO : Ignition 2.19.0
Apr 17 23:34:07.203729 ignition[1364]: INFO : Stage: mount
Apr 17 23:34:07.207905 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:34:07.207905 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:34:07.213185 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:34:07.216738 ignition[1364]: INFO : PUT result: OK
Apr 17 23:34:07.222396 ignition[1364]: INFO : mount: mount passed
Apr 17 23:34:07.224549 ignition[1364]: INFO : Ignition finished successfully
Apr 17 23:34:07.228223 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:34:07.244331 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:34:07.268046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:34:07.300172 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1376)
Apr 17 23:34:07.305074 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:34:07.305167 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:34:07.305198 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:34:07.313187 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:34:07.315458 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:34:07.355507 ignition[1392]: INFO : Ignition 2.19.0
Apr 17 23:34:07.355507 ignition[1392]: INFO : Stage: files
Apr 17 23:34:07.363409 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:34:07.363409 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:34:07.363409 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:34:07.376724 ignition[1392]: INFO : PUT result: OK
Apr 17 23:34:07.382758 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:34:07.390147 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:34:07.390147 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:34:07.422958 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:34:07.429696 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:34:07.436426 unknown[1392]: wrote ssh authorized keys file for user: core
Apr 17 23:34:07.439156 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:34:07.444022 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 23:34:07.444022 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 17 23:34:07.557947 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:34:07.836047 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 23:34:07.836047 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:34:07.845281 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 17 23:34:08.202731 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:34:08.672436 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:34:08.677735 ignition[1392]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:34:08.687544 ignition[1392]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:34:08.694545 ignition[1392]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:34:08.694545 ignition[1392]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:34:08.694545 ignition[1392]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:34:08.694545 ignition[1392]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:34:08.694545 ignition[1392]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:34:08.694545 ignition[1392]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:34:08.694545 ignition[1392]: INFO : files: files passed
Apr 17 23:34:08.694545 ignition[1392]: INFO : Ignition finished successfully
Apr 17 23:34:08.691675 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:34:08.710972 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:34:08.736036 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:34:08.754789 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:34:08.755539 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:34:08.776227 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:34:08.776227 initrd-setup-root-after-ignition[1422]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:34:08.785826 initrd-setup-root-after-ignition[1426]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:34:08.791771 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:34:08.798451 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:34:08.819529 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:34:08.866870 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:34:08.867067 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:34:08.873089 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:34:08.877682 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:34:08.880922 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:34:08.892555 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:34:08.926399 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:34:08.938402 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:34:08.967492 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:34:08.973715 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:34:08.976731 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:34:08.979056 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:34:08.979717 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:34:08.986769 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:34:08.991512 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:34:08.993978 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:34:09.006060 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:34:09.008931 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:34:09.012255 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:34:09.019254 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:34:09.026898 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:34:09.030173 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:34:09.034615 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:34:09.036617 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:34:09.036864 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:34:09.047904 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:34:09.051974 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:34:09.054984 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:34:09.055267 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:34:09.069517 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:34:09.069782 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:34:09.082759 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:34:09.083053 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:34:09.086200 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:34:09.086453 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:34:09.102913 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:34:09.109735 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:34:09.111044 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:34:09.126643 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:34:09.130999 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:34:09.131469 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:34:09.135588 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:34:09.165729 ignition[1446]: INFO : Ignition 2.19.0
Apr 17 23:34:09.165729 ignition[1446]: INFO : Stage: umount
Apr 17 23:34:09.165729 ignition[1446]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:34:09.165729 ignition[1446]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:34:09.165729 ignition[1446]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:34:09.135846 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:34:09.207521 ignition[1446]: INFO : PUT result: OK
Apr 17 23:34:09.207521 ignition[1446]: INFO : umount: umount passed
Apr 17 23:34:09.207521 ignition[1446]: INFO : Ignition finished successfully
Apr 17 23:34:09.160413 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:34:09.169746 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:34:09.177922 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:34:09.178104 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:34:09.186653 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:34:09.186788 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:34:09.215556 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:34:09.215673 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:34:09.222404 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:34:09.222515 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:34:09.222695 systemd[1]: Stopped target network.target - Network.
Apr 17 23:34:09.222976 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:34:09.223068 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:34:09.226003 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:34:09.226641 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:34:09.241977 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:34:09.247532 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:34:09.249726 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:34:09.252082 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:34:09.253254 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:34:09.276834 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:34:09.276937 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:34:09.277855 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:34:09.277963 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:34:09.278548 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:34:09.278628 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:34:09.290189 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:34:09.310488 systemd-networkd[1203]: eth0: DHCPv6 lease lost
Apr 17 23:34:09.317856 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:34:09.320632 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:34:09.325811 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:34:09.326053 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:34:09.335104 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:34:09.336354 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:34:09.342369 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:34:09.342736 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:34:09.359772 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:34:09.361994 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:34:09.367495 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:34:09.367618 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:34:09.380356 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:34:09.383066 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:34:09.383221 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:34:09.394695 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:34:09.394817 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:34:09.397353 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:34:09.397454 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:34:09.400402 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:34:09.400514 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:34:09.406786 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:34:09.441728 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:34:09.442672 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:34:09.452109 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:34:09.452894 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:34:09.461880 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:34:09.461971 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:34:09.464443 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:34:09.464545 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:34:09.467279 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:34:09.467378 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:34:09.488695 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:34:09.488816 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:34:09.503483 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:34:09.509304 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:34:09.509441 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:34:09.513531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:34:09.513645 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:34:09.518619 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:34:09.519078 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:34:09.551326 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:34:09.553738 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:34:09.560766 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:34:09.572418 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:34:09.593801 systemd[1]: Switching root.
Apr 17 23:34:09.641764 systemd-journald[251]: Journal stopped
Apr 17 23:34:12.147741 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:34:12.147892 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:34:12.147950 kernel: SELinux: policy capability open_perms=1
Apr 17 23:34:12.147991 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:34:12.148026 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:34:12.148058 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:34:12.148091 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:34:12.148163 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:34:12.148200 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:34:12.148233 kernel: audit: type=1403 audit(1776468850.144:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:34:12.148280 systemd[1]: Successfully loaded SELinux policy in 86.116ms.
Apr 17 23:34:12.153097 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.755ms.
Apr 17 23:34:12.153185 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:34:12.153222 systemd[1]: Detected virtualization amazon.
Apr 17 23:34:12.153255 systemd[1]: Detected architecture arm64.
Apr 17 23:34:12.153287 systemd[1]: Detected first boot.
Apr 17 23:34:12.153321 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:34:12.153355 zram_generator::config[1489]: No configuration found.
Apr 17 23:34:12.153391 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:34:12.153426 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:34:12.153467 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:34:12.153507 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:34:12.153540 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:34:12.153575 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:34:12.153608 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:34:12.153645 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:34:12.153677 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:34:12.153713 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:34:12.153751 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:34:12.153784 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:34:12.153820 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:34:12.153854 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:34:12.153887 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:34:12.153921 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:34:12.153955 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:34:12.153988 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:34:12.154020 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:34:12.154060 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:34:12.154093 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:34:12.154162 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:34:12.154202 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:34:12.154233 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:34:12.154274 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:34:12.154307 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:34:12.156472 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:34:12.156525 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:34:12.156561 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:34:12.156595 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:34:12.156631 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:34:12.156663 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:34:12.156695 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:34:12.156727 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:34:12.156758 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:34:12.156788 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:34:12.156827 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:34:12.156858 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:34:12.156889 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:34:12.156921 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:34:12.156953 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:34:12.156987 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:34:12.157046 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:34:12.157081 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:34:12.157148 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:34:12.157226 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:34:12.157266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:34:12.157297 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:34:12.157330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:34:12.157374 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:34:12.157407 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:34:12.157440 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:34:12.157472 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:34:12.157509 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:34:12.157543 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:34:12.157576 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:34:12.157609 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:34:12.157640 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:34:12.157671 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:34:12.157701 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:34:12.157731 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:34:12.157764 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:34:12.157798 systemd[1]: Stopped verity-setup.service.
Apr 17 23:34:12.157829 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:34:12.157859 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:34:12.157889 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:34:12.157926 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:34:12.158008 systemd-journald[1569]: Collecting audit messages is disabled.
Apr 17 23:34:12.158082 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:34:12.158140 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:34:12.158178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:34:12.158211 systemd-journald[1569]: Journal started
Apr 17 23:34:12.158266 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec2e7695bbf06c780423e2bb5434f3eb) is 8.0M, max 75.3M, 67.3M free.
Apr 17 23:34:11.553823 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:34:11.645714 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 17 23:34:11.646710 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:34:12.173103 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:34:12.173213 kernel: fuse: init (API version 7.39)
Apr 17 23:34:12.172731 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:34:12.174272 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:34:12.181470 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:34:12.185360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:34:12.190723 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:34:12.191052 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:34:12.208711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:34:12.209964 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:34:12.232312 kernel: loop: module loaded
Apr 17 23:34:12.233828 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:34:12.236261 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:34:12.259177 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:34:12.278295 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:34:12.283751 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:34:12.286816 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:34:12.293253 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:34:12.299987 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:34:12.313596 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:34:12.332553 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:34:12.341781 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:34:12.347491 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:34:12.347562 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:34:12.356545 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:34:12.370497 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:34:12.383413 kernel: ACPI: bus type drm_connector registered
Apr 17 23:34:12.385697 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:34:12.391084 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:34:12.401634 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:34:12.411533 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:34:12.417350 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:34:12.423034 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:34:12.435527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:34:12.448553 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:34:12.455684 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:34:12.459262 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:34:12.467583 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:34:12.496267 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:34:12.503771 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:34:12.520721 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:34:12.541152 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:34:12.553560 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:34:12.592502 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec2e7695bbf06c780423e2bb5434f3eb is 74.557ms for 903 entries.
Apr 17 23:34:12.592502 systemd-journald[1569]: System Journal (/var/log/journal/ec2e7695bbf06c780423e2bb5434f3eb) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:34:12.705433 systemd-journald[1569]: Received client request to flush runtime journal.
Apr 17 23:34:12.705542 kernel: loop0: detected capacity change from 0 to 114432
Apr 17 23:34:12.595024 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:34:12.600545 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:34:12.659311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:34:12.666070 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:34:12.686441 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:34:12.709501 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:34:12.743055 udevadm[1631]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:34:12.758371 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:34:12.773831 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:34:12.791187 kernel: loop1: detected capacity change from 0 to 114328
Apr 17 23:34:12.796342 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:34:12.873518 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Apr 17 23:34:12.873565 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Apr 17 23:34:12.890675 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:34:12.905165 kernel: loop2: detected capacity change from 0 to 52536
Apr 17 23:34:13.012161 kernel: loop3: detected capacity change from 0 to 209336
Apr 17 23:34:13.077194 kernel: loop4: detected capacity change from 0 to 114432
Apr 17 23:34:13.092153 kernel: loop5: detected capacity change from 0 to 114328
Apr 17 23:34:13.111153 kernel: loop6: detected capacity change from 0 to 52536
Apr 17 23:34:13.134156 kernel: loop7: detected capacity change from 0 to 209336
Apr 17 23:34:13.159426 (sd-merge)[1643]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 17 23:34:13.161220 (sd-merge)[1643]: Merged extensions into '/usr'.
Apr 17 23:34:13.170887 systemd[1]: Reloading requested from client PID 1615 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:34:13.170928 systemd[1]: Reloading...
Apr 17 23:34:13.360171 zram_generator::config[1666]: No configuration found.
Apr 17 23:34:13.767555 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:34:13.919637 systemd[1]: Reloading finished in 747 ms.
Apr 17 23:34:13.964652 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:34:13.968448 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:34:13.992090 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:34:13.999544 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:34:14.008217 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:34:14.032394 systemd[1]: Reloading requested from client PID 1721 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:34:14.032425 systemd[1]: Reloading...
Apr 17 23:34:14.112735 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:34:14.117256 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:34:14.121645 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:34:14.124097 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Apr 17 23:34:14.124289 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Apr 17 23:34:14.139771 systemd-udevd[1723]: Using default interface naming scheme 'v255'.
Apr 17 23:34:14.152353 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:34:14.152375 systemd-tmpfiles[1722]: Skipping /boot
Apr 17 23:34:14.243481 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:34:14.243513 systemd-tmpfiles[1722]: Skipping /boot
Apr 17 23:34:14.272168 zram_generator::config[1746]: No configuration found.
Apr 17 23:34:14.358209 ldconfig[1610]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:34:14.476556 (udev-worker)[1771]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:34:14.738306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:34:14.823145 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1761)
Apr 17 23:34:14.927486 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 23:34:14.929051 systemd[1]: Reloading finished in 895 ms.
Apr 17 23:34:14.962375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:34:14.970621 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:34:14.975269 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:34:15.090309 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:34:15.101655 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:34:15.109733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:34:15.116675 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:34:15.129658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:34:15.140241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:34:15.142994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:34:15.148851 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:34:15.159365 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:34:15.169683 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:34:15.181800 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:34:15.188042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:34:15.199594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:34:15.199899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:34:15.226607 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:34:15.227008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:34:15.254765 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:34:15.255141 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:34:15.295221 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:34:15.298957 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:34:15.311419 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 17 23:34:15.321489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:34:15.329634 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:34:15.341637 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:34:15.348668 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:34:15.351264 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:34:15.355038 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:34:15.357716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:34:15.358099 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:34:15.366773 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:34:15.375225 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:34:15.390528 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:34:15.394965 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:34:15.401260 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:34:15.401653 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:34:15.432652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:34:15.436867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:34:15.440833 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:34:15.466630 augenrules[1959]: No rules
Apr 17 23:34:15.469005 lvm[1944]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:34:15.477332 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:34:15.505319 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:34:15.510539 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:34:15.516194 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:34:15.525308 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:34:15.566252 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:34:15.569718 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:34:15.584541 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:34:15.589338 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:34:15.609307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:34:15.622188 lvm[1971]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:34:15.664820 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:34:15.737502 systemd-networkd[1925]: lo: Link UP
Apr 17 23:34:15.737528 systemd-networkd[1925]: lo: Gained carrier
Apr 17 23:34:15.740945 systemd-networkd[1925]: Enumeration completed
Apr 17 23:34:15.741241 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:34:15.744484 systemd-networkd[1925]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:34:15.744510 systemd-networkd[1925]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:34:15.747034 systemd-networkd[1925]: eth0: Link UP
Apr 17 23:34:15.747435 systemd-networkd[1925]: eth0: Gained carrier
Apr 17 23:34:15.747472 systemd-networkd[1925]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:34:15.747940 systemd-resolved[1926]: Positive Trust Anchors:
Apr 17 23:34:15.747964 systemd-resolved[1926]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:34:15.748026 systemd-resolved[1926]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:34:15.758318 systemd-networkd[1925]: eth0: DHCPv4 address 172.31.27.239/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 17 23:34:15.759594 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:34:15.775345 systemd-resolved[1926]: Defaulting to hostname 'linux'.
Apr 17 23:34:15.778704 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:34:15.781343 systemd[1]: Reached target network.target - Network.
Apr 17 23:34:15.783518 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:34:15.786377 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:34:15.789043 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:34:15.791912 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:34:15.795095 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:34:15.797717 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:34:15.800559 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:34:15.803367 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:34:15.803423 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:34:15.805535 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:34:15.808393 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:34:15.813632 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:34:15.822355 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:34:15.825901 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:34:15.828528 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:34:15.830861 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:34:15.833102 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:34:15.833182 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:34:15.840289 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:34:15.846465 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 17 23:34:15.861760 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:34:15.868011 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:34:15.874875 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:34:15.877315 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:34:15.880497 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:34:15.894394 systemd[1]: Started ntpd.service - Network Time Service. Apr 17 23:34:15.902378 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:34:15.909839 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 17 23:34:15.920536 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:34:15.939657 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 23:34:15.951442 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:34:15.955673 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 23:34:15.956599 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:34:15.961540 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 23:34:15.969556 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:34:15.994970 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:34:16.003812 jq[1988]: false Apr 17 23:34:15.997531 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:34:16.007831 dbus-daemon[1987]: [system] SELinux support is enabled Apr 17 23:34:16.008166 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 17 23:34:16.026707 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:34:16.026767 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:34:16.041085 dbus-daemon[1987]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1925 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 17 23:34:16.029682 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:34:16.029719 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:34:16.053142 jq[2000]: true Apr 17 23:34:16.065043 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:34:16.067327 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 23:34:16.084737 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 17 23:34:16.114811 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:34:16.115625 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 17 23:34:16.154638 jq[2014]: true Apr 17 23:34:16.177145 tar[2006]: linux-arm64/LICENSE Apr 17 23:34:16.177145 tar[2006]: linux-arm64/helm Apr 17 23:34:16.182529 extend-filesystems[1989]: Found loop4 Apr 17 23:34:16.182529 extend-filesystems[1989]: Found loop5 Apr 17 23:34:16.182529 extend-filesystems[1989]: Found loop6 Apr 17 23:34:16.182529 extend-filesystems[1989]: Found loop7 Apr 17 23:34:16.182529 extend-filesystems[1989]: Found nvme0n1 Apr 17 23:34:16.182529 extend-filesystems[1989]: Found nvme0n1p2 Apr 17 23:34:16.207317 extend-filesystems[1989]: Found nvme0n1p3 Apr 17 23:34:16.207317 extend-filesystems[1989]: Found usr Apr 17 23:34:16.207317 extend-filesystems[1989]: Found nvme0n1p4 Apr 17 23:34:16.207317 extend-filesystems[1989]: Found nvme0n1p6 Apr 17 23:34:16.207317 extend-filesystems[1989]: Found nvme0n1p7 Apr 17 23:34:16.207317 extend-filesystems[1989]: Found nvme0n1p9 Apr 17 23:34:16.207317 extend-filesystems[1989]: Checking size of /dev/nvme0n1p9 Apr 17 23:34:16.241873 (ntainerd)[2021]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:34:16.277326 update_engine[1999]: I20260417 23:34:16.272814 1999 main.cc:92] Flatcar Update Engine starting Apr 17 23:34:16.278376 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 17 23:34:16.284668 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:34:16.294160 update_engine[1999]: I20260417 23:34:16.290354 1999 update_check_scheduler.cc:74] Next update check in 8m14s Apr 17 23:34:16.296498 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 17 23:34:16.310585 ntpd[1991]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:13 UTC 2026 (1): Starting Apr 17 23:34:16.311129 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:13 UTC 2026 (1): Starting Apr 17 23:34:16.311450 ntpd[1991]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:34:16.314543 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:34:16.314543 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: ---------------------------------------------------- Apr 17 23:34:16.314543 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:34:16.314543 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:34:16.314543 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: corporation. Support and training for ntp-4 are Apr 17 23:34:16.314543 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: available at https://www.nwtime.org/support Apr 17 23:34:16.314543 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: ---------------------------------------------------- Apr 17 23:34:16.311480 ntpd[1991]: ---------------------------------------------------- Apr 17 23:34:16.311500 ntpd[1991]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:34:16.311519 ntpd[1991]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:34:16.311538 ntpd[1991]: corporation. 
Support and training for ntp-4 are Apr 17 23:34:16.311559 ntpd[1991]: available at https://www.nwtime.org/support Apr 17 23:34:16.311577 ntpd[1991]: ---------------------------------------------------- Apr 17 23:34:16.322655 ntpd[1991]: proto: precision = 0.108 usec (-23) Apr 17 23:34:16.324517 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: proto: precision = 0.108 usec (-23) Apr 17 23:34:16.324517 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: basedate set to 2026-04-05 Apr 17 23:34:16.324517 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: gps base set to 2026-04-05 (week 2413) Apr 17 23:34:16.323067 ntpd[1991]: basedate set to 2026-04-05 Apr 17 23:34:16.323092 ntpd[1991]: gps base set to 2026-04-05 (week 2413) Apr 17 23:34:16.332614 extend-filesystems[1989]: Resized partition /dev/nvme0n1p9 Apr 17 23:34:16.335547 ntpd[1991]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:34:16.337445 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:34:16.337499 extend-filesystems[2043]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:34:16.338416 ntpd[1991]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:34:16.347743 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:34:16.347743 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:34:16.347743 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: Listen normally on 3 eth0 172.31.27.239:123 Apr 17 23:34:16.347743 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: Listen normally on 4 lo [::1]:123 Apr 17 23:34:16.347743 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: bind(21) AF_INET6 fe80::4b2:71ff:fe96:b6f1%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:34:16.347743 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: unable to create socket on eth0 (5) for fe80::4b2:71ff:fe96:b6f1%2#123 Apr 17 23:34:16.347743 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: failed to init interface for address fe80::4b2:71ff:fe96:b6f1%2 Apr 17 23:34:16.347743 ntpd[1991]: 17 Apr 
23:34:16 ntpd[1991]: Listening on routing socket on fd #21 for interface updates Apr 17 23:34:16.339973 ntpd[1991]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:34:16.340040 ntpd[1991]: Listen normally on 3 eth0 172.31.27.239:123 Apr 17 23:34:16.340107 ntpd[1991]: Listen normally on 4 lo [::1]:123 Apr 17 23:34:16.340609 ntpd[1991]: bind(21) AF_INET6 fe80::4b2:71ff:fe96:b6f1%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:34:16.340651 ntpd[1991]: unable to create socket on eth0 (5) for fe80::4b2:71ff:fe96:b6f1%2#123 Apr 17 23:34:16.340680 ntpd[1991]: failed to init interface for address fe80::4b2:71ff:fe96:b6f1%2 Apr 17 23:34:16.340737 ntpd[1991]: Listening on routing socket on fd #21 for interface updates Apr 17 23:34:16.354977 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 17 23:34:16.355067 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:34:16.355067 ntpd[1991]: 17 Apr 23:34:16 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:34:16.351800 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:34:16.351846 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:34:16.476315 systemd-logind[1998]: Watching system buttons on /dev/input/event0 (Power Button) Apr 17 23:34:16.476360 systemd-logind[1998]: Watching system buttons on /dev/input/event1 (Sleep Button) Apr 17 23:34:16.476719 systemd-logind[1998]: New seat seat0. Apr 17 23:34:16.479283 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 17 23:34:16.506490 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetch successful Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetch successful Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetch successful Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetch successful Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetch failed with 404: resource not found Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetch successful Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetch successful Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 17 23:34:16.519449 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetch successful Apr 17 23:34:16.530258 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 17 23:34:16.530258 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetch successful Apr 17 23:34:16.530258 coreos-metadata[1986]: Apr 17 23:34:16.519 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 17 23:34:16.530258 coreos-metadata[1986]: Apr 17 23:34:16.520 INFO Fetch successful Apr 17 23:34:16.530491 extend-filesystems[2043]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 17 23:34:16.530491 extend-filesystems[2043]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 17 23:34:16.530491 extend-filesystems[2043]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 17 23:34:16.542167 bash[2056]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:34:16.541837 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:34:16.544861 extend-filesystems[1989]: Resized filesystem in /dev/nvme0n1p9 Apr 17 23:34:16.544861 extend-filesystems[1989]: Found nvme0n1p1 Apr 17 23:34:16.542319 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:34:16.574345 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:34:16.583811 systemd[1]: Starting sshkeys.service... Apr 17 23:34:16.665097 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 23:34:16.669133 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:34:16.711628 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Apr 17 23:34:16.725786 dbus-daemon[1987]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 17 23:34:16.733425 dbus-daemon[1987]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2017 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 23:34:16.734714 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 17 23:34:16.738419 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 17 23:34:16.750823 systemd[1]: Starting polkit.service - Authorization Manager... Apr 17 23:34:16.839169 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1754) Apr 17 23:34:16.843068 polkitd[2077]: Started polkitd version 121 Apr 17 23:34:16.888752 polkitd[2077]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:34:16.888879 polkitd[2077]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:34:16.899363 polkitd[2077]: Finished loading, compiling and executing 2 rules Apr 17 23:34:16.902485 dbus-daemon[1987]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:34:16.902932 systemd[1]: Started polkit.service - Authorization Manager. Apr 17 23:34:16.907170 polkitd[2077]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:34:16.968384 systemd-hostnamed[2017]: Hostname set to (transient) Apr 17 23:34:16.968559 systemd-resolved[1926]: System hostname changed to 'ip-172-31-27-239'. 
Apr 17 23:34:16.995352 coreos-metadata[2073]: Apr 17 23:34:16.994 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:34:16.998212 coreos-metadata[2073]: Apr 17 23:34:16.996 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 17 23:34:16.999075 coreos-metadata[2073]: Apr 17 23:34:16.998 INFO Fetch successful Apr 17 23:34:16.999075 coreos-metadata[2073]: Apr 17 23:34:16.998 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 17 23:34:17.000513 coreos-metadata[2073]: Apr 17 23:34:17.000 INFO Fetch successful Apr 17 23:34:17.003138 unknown[2073]: wrote ssh authorized keys file for user: core Apr 17 23:34:17.036129 containerd[2021]: time="2026-04-17T23:34:17.035973838Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:34:17.089542 update-ssh-keys[2126]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:34:17.093452 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:34:17.103941 systemd[1]: Finished sshkeys.service. Apr 17 23:34:17.228266 containerd[2021]: time="2026-04-17T23:34:17.228164003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:34:17.233067 containerd[2021]: time="2026-04-17T23:34:17.232997579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:34:17.233251 containerd[2021]: time="2026-04-17T23:34:17.233221763Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Apr 17 23:34:17.233400 containerd[2021]: time="2026-04-17T23:34:17.233369987Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:34:17.233785 containerd[2021]: time="2026-04-17T23:34:17.233739851Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:34:17.234928 containerd[2021]: time="2026-04-17T23:34:17.234590831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:34:17.234928 containerd[2021]: time="2026-04-17T23:34:17.234768239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:34:17.234928 containerd[2021]: time="2026-04-17T23:34:17.234799139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:34:17.235401 containerd[2021]: time="2026-04-17T23:34:17.235362203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:34:17.237748 containerd[2021]: time="2026-04-17T23:34:17.236521103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:34:17.237748 containerd[2021]: time="2026-04-17T23:34:17.236576903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:34:17.237748 containerd[2021]: time="2026-04-17T23:34:17.236606327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:34:17.237748 containerd[2021]: time="2026-04-17T23:34:17.236843459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:34:17.238993 containerd[2021]: time="2026-04-17T23:34:17.238355099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:34:17.238993 containerd[2021]: time="2026-04-17T23:34:17.238621271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:34:17.238993 containerd[2021]: time="2026-04-17T23:34:17.238666475Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:34:17.238993 containerd[2021]: time="2026-04-17T23:34:17.238835411Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:34:17.238993 containerd[2021]: time="2026-04-17T23:34:17.238932491Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:34:17.251817 containerd[2021]: time="2026-04-17T23:34:17.250901507Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:34:17.251817 containerd[2021]: time="2026-04-17T23:34:17.251028527Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:34:17.251817 containerd[2021]: time="2026-04-17T23:34:17.251384675Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:34:17.251817 containerd[2021]: time="2026-04-17T23:34:17.251426843Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Apr 17 23:34:17.251817 containerd[2021]: time="2026-04-17T23:34:17.251462963Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:34:17.251817 containerd[2021]: time="2026-04-17T23:34:17.251730647Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:34:17.252936 containerd[2021]: time="2026-04-17T23:34:17.252897539Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:34:17.253790 containerd[2021]: time="2026-04-17T23:34:17.253749119Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:34:17.253929 containerd[2021]: time="2026-04-17T23:34:17.253900739Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:34:17.254049 containerd[2021]: time="2026-04-17T23:34:17.254021303Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:34:17.254228 containerd[2021]: time="2026-04-17T23:34:17.254198027Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:34:17.254340 containerd[2021]: time="2026-04-17T23:34:17.254312963Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:34:17.254448 containerd[2021]: time="2026-04-17T23:34:17.254421179Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254571263Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254611187Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254642039Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254689763Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254721971Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254762447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254808455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254838863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254869283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254898263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254928443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254957003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.254989091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.255547 containerd[2021]: time="2026-04-17T23:34:17.255020795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.256200 containerd[2021]: time="2026-04-17T23:34:17.255053747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.256200 containerd[2021]: time="2026-04-17T23:34:17.255083651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.256200 containerd[2021]: time="2026-04-17T23:34:17.255113939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.256200 containerd[2021]: time="2026-04-17T23:34:17.255187199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.256200 containerd[2021]: time="2026-04-17T23:34:17.255223139Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:34:17.256200 containerd[2021]: time="2026-04-17T23:34:17.255267827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.256200 containerd[2021]: time="2026-04-17T23:34:17.255296315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.256200 containerd[2021]: time="2026-04-17T23:34:17.255322319Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:34:17.260052 containerd[2021]: time="2026-04-17T23:34:17.257797451Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Apr 17 23:34:17.260052 containerd[2021]: time="2026-04-17T23:34:17.258188351Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:34:17.260052 containerd[2021]: time="2026-04-17T23:34:17.258218291Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:34:17.260052 containerd[2021]: time="2026-04-17T23:34:17.258247631Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:34:17.260052 containerd[2021]: time="2026-04-17T23:34:17.258271883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:34:17.260052 containerd[2021]: time="2026-04-17T23:34:17.258301367Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:34:17.260052 containerd[2021]: time="2026-04-17T23:34:17.258325439Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:34:17.260052 containerd[2021]: time="2026-04-17T23:34:17.258350579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 17 23:34:17.260597 containerd[2021]: time="2026-04-17T23:34:17.258973247Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:34:17.260597 containerd[2021]: time="2026-04-17T23:34:17.259085483Z" level=info msg="Connect containerd service" Apr 17 23:34:17.260597 containerd[2021]: time="2026-04-17T23:34:17.259166891Z" level=info msg="using legacy CRI server" Apr 17 23:34:17.260597 containerd[2021]: time="2026-04-17T23:34:17.259188083Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:34:17.260597 containerd[2021]: time="2026-04-17T23:34:17.259339187Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:34:17.265132 containerd[2021]: time="2026-04-17T23:34:17.263694899Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:34:17.265132 containerd[2021]: time="2026-04-17T23:34:17.264399251Z" level=info msg="Start subscribing containerd event" Apr 17 23:34:17.265132 containerd[2021]: time="2026-04-17T23:34:17.264493835Z" level=info msg="Start recovering state" Apr 17 23:34:17.265132 containerd[2021]: time="2026-04-17T23:34:17.264611687Z" level=info msg="Start event monitor" Apr 17 23:34:17.265132 containerd[2021]: time="2026-04-17T23:34:17.264635171Z" level=info msg="Start 
snapshots syncer" Apr 17 23:34:17.265132 containerd[2021]: time="2026-04-17T23:34:17.264656735Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:34:17.265132 containerd[2021]: time="2026-04-17T23:34:17.264674939Z" level=info msg="Start streaming server" Apr 17 23:34:17.268108 containerd[2021]: time="2026-04-17T23:34:17.267487319Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:34:17.268108 containerd[2021]: time="2026-04-17T23:34:17.267605039Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:34:17.269258 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:34:17.285094 containerd[2021]: time="2026-04-17T23:34:17.283574291Z" level=info msg="containerd successfully booted in 0.234635s" Apr 17 23:34:17.304650 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:34:17.312920 ntpd[1991]: bind(24) AF_INET6 fe80::4b2:71ff:fe96:b6f1%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:34:17.313019 ntpd[1991]: unable to create socket on eth0 (6) for fe80::4b2:71ff:fe96:b6f1%2#123 Apr 17 23:34:17.313516 ntpd[1991]: 17 Apr 23:34:17 ntpd[1991]: bind(24) AF_INET6 fe80::4b2:71ff:fe96:b6f1%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:34:17.313516 ntpd[1991]: 17 Apr 23:34:17 ntpd[1991]: unable to create socket on eth0 (6) for fe80::4b2:71ff:fe96:b6f1%2#123 Apr 17 23:34:17.313516 ntpd[1991]: 17 Apr 23:34:17 ntpd[1991]: failed to init interface for address fe80::4b2:71ff:fe96:b6f1%2 Apr 17 23:34:17.313052 ntpd[1991]: failed to init interface for address fe80::4b2:71ff:fe96:b6f1%2 Apr 17 23:34:17.361259 locksmithd[2036]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:34:17.446222 systemd-networkd[1925]: eth0: Gained IPv6LL Apr 17 23:34:17.455053 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Apr 17 23:34:17.459398 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:34:17.472625 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 17 23:34:17.487533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:34:17.493100 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:34:17.613497 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 23:34:17.654419 amazon-ssm-agent[2193]: Initializing new seelog logger Apr 17 23:34:17.654893 amazon-ssm-agent[2193]: New Seelog Logger Creation Complete Apr 17 23:34:17.657769 amazon-ssm-agent[2193]: 2026/04/17 23:34:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:34:17.657769 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:34:17.657769 amazon-ssm-agent[2193]: 2026/04/17 23:34:17 processing appconfig overrides Apr 17 23:34:17.660137 amazon-ssm-agent[2193]: 2026/04/17 23:34:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:34:17.660137 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:34:17.660137 amazon-ssm-agent[2193]: 2026/04/17 23:34:17 processing appconfig overrides Apr 17 23:34:17.660137 amazon-ssm-agent[2193]: 2026/04/17 23:34:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:34:17.660137 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:34:17.660137 amazon-ssm-agent[2193]: 2026/04/17 23:34:17 processing appconfig overrides Apr 17 23:34:17.664149 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO Proxy environment variables: Apr 17 23:34:17.671468 amazon-ssm-agent[2193]: 2026/04/17 23:34:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:34:17.671468 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 17 23:34:17.671651 amazon-ssm-agent[2193]: 2026/04/17 23:34:17 processing appconfig overrides Apr 17 23:34:17.733817 sshd_keygen[2026]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:34:17.767105 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO http_proxy: Apr 17 23:34:17.841369 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:34:17.856812 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:34:17.862809 systemd[1]: Started sshd@0-172.31.27.239:22-4.175.71.9:33398.service - OpenSSH per-connection server daemon (4.175.71.9:33398). Apr 17 23:34:17.876442 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO no_proxy: Apr 17 23:34:17.883016 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:34:17.883422 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:34:17.894806 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:34:17.919842 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:34:17.930722 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:34:17.937869 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:34:17.941606 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:34:17.975249 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO https_proxy: Apr 17 23:34:18.076434 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO Checking if agent identity type OnPrem can be assumed Apr 17 23:34:18.174839 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO Checking if agent identity type EC2 can be assumed Apr 17 23:34:18.264647 tar[2006]: linux-arm64/README.md Apr 17 23:34:18.274283 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO Agent will take identity from EC2 Apr 17 23:34:18.308181 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 17 23:34:18.373709 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:34:18.473259 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:34:18.572333 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:34:18.609856 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 17 23:34:18.610370 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 17 23:34:18.610370 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO [amazon-ssm-agent] Starting Core Agent Apr 17 23:34:18.610370 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 17 23:34:18.610370 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO [Registrar] Starting registrar module Apr 17 23:34:18.610370 amazon-ssm-agent[2193]: 2026-04-17 23:34:17 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 17 23:34:18.610370 amazon-ssm-agent[2193]: 2026-04-17 23:34:18 INFO [EC2Identity] EC2 registration was successful. 
Apr 17 23:34:18.610370 amazon-ssm-agent[2193]: 2026-04-17 23:34:18 INFO [CredentialRefresher] credentialRefresher has started Apr 17 23:34:18.610370 amazon-ssm-agent[2193]: 2026-04-17 23:34:18 INFO [CredentialRefresher] Starting credentials refresher loop Apr 17 23:34:18.610370 amazon-ssm-agent[2193]: 2026-04-17 23:34:18 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 17 23:34:18.672919 amazon-ssm-agent[2193]: 2026-04-17 23:34:18 INFO [CredentialRefresher] Next credential rotation will be in 31.391653551066668 minutes Apr 17 23:34:19.012979 sshd[2218]: Accepted publickey for core from 4.175.71.9 port 33398 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:19.017660 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:19.039924 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:34:19.054733 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:34:19.062354 systemd-logind[1998]: New session 1 of user core. Apr 17 23:34:19.095415 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:34:19.114916 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:34:19.135755 (systemd)[2235]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:34:19.162490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:19.165878 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:34:19.177792 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:34:19.391816 systemd[2235]: Queued start job for default target default.target. Apr 17 23:34:19.400092 systemd[2235]: Created slice app.slice - User Application Slice. 
Apr 17 23:34:19.400188 systemd[2235]: Reached target paths.target - Paths. Apr 17 23:34:19.400221 systemd[2235]: Reached target timers.target - Timers. Apr 17 23:34:19.402839 systemd[2235]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:34:19.434894 systemd[2235]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:34:19.436205 systemd[2235]: Reached target sockets.target - Sockets. Apr 17 23:34:19.436261 systemd[2235]: Reached target basic.target - Basic System. Apr 17 23:34:19.436365 systemd[2235]: Reached target default.target - Main User Target. Apr 17 23:34:19.436444 systemd[2235]: Startup finished in 262ms. Apr 17 23:34:19.436556 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:34:19.447400 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:34:19.450357 systemd[1]: Startup finished in 1.207s (kernel) + 9.294s (initrd) + 9.390s (userspace) = 19.892s. Apr 17 23:34:19.639305 amazon-ssm-agent[2193]: 2026-04-17 23:34:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 17 23:34:19.741259 amazon-ssm-agent[2193]: 2026-04-17 23:34:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2259) started Apr 17 23:34:19.844244 amazon-ssm-agent[2193]: 2026-04-17 23:34:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 17 23:34:20.146416 kubelet[2242]: E0417 23:34:20.146203 2242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:34:20.152211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 
23:34:20.152908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:34:20.153623 systemd[1]: kubelet.service: Consumed 1.365s CPU time. Apr 17 23:34:20.179648 systemd[1]: Started sshd@1-172.31.27.239:22-4.175.71.9:33414.service - OpenSSH per-connection server daemon (4.175.71.9:33414). Apr 17 23:34:20.312496 ntpd[1991]: Listen normally on 7 eth0 [fe80::4b2:71ff:fe96:b6f1%2]:123 Apr 17 23:34:20.312954 ntpd[1991]: 17 Apr 23:34:20 ntpd[1991]: Listen normally on 7 eth0 [fe80::4b2:71ff:fe96:b6f1%2]:123 Apr 17 23:34:21.205153 sshd[2273]: Accepted publickey for core from 4.175.71.9 port 33414 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:21.207806 sshd[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:21.215279 systemd-logind[1998]: New session 2 of user core. Apr 17 23:34:21.227388 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:34:21.911403 sshd[2273]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:21.919181 systemd-logind[1998]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:34:21.919612 systemd[1]: sshd@1-172.31.27.239:22-4.175.71.9:33414.service: Deactivated successfully. Apr 17 23:34:21.923993 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:34:21.927528 systemd-logind[1998]: Removed session 2. Apr 17 23:34:22.090445 systemd[1]: Started sshd@2-172.31.27.239:22-4.175.71.9:33430.service - OpenSSH per-connection server daemon (4.175.71.9:33430). Apr 17 23:34:23.119654 sshd[2280]: Accepted publickey for core from 4.175.71.9 port 33430 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:23.122927 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:23.130406 systemd-logind[1998]: New session 3 of user core. Apr 17 23:34:23.142382 systemd[1]: Started session-3.scope - Session 3 of User core. 
Apr 17 23:34:22.969045 systemd-resolved[1926]: Clock change detected. Flushing caches. Apr 17 23:34:22.975462 systemd-journald[1569]: Time jumped backwards, rotating. Apr 17 23:34:23.471952 sshd[2280]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:23.478158 systemd[1]: sshd@2-172.31.27.239:22-4.175.71.9:33430.service: Deactivated successfully. Apr 17 23:34:23.481774 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:34:23.486630 systemd-logind[1998]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:34:23.488925 systemd-logind[1998]: Removed session 3. Apr 17 23:34:23.648314 systemd[1]: Started sshd@3-172.31.27.239:22-4.175.71.9:33446.service - OpenSSH per-connection server daemon (4.175.71.9:33446). Apr 17 23:34:24.635838 sshd[2288]: Accepted publickey for core from 4.175.71.9 port 33446 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:24.637175 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:24.645887 systemd-logind[1998]: New session 4 of user core. Apr 17 23:34:24.653041 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:34:25.318918 sshd[2288]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:25.327063 systemd-logind[1998]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:34:25.327506 systemd[1]: sshd@3-172.31.27.239:22-4.175.71.9:33446.service: Deactivated successfully. Apr 17 23:34:25.332381 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:34:25.335646 systemd-logind[1998]: Removed session 4. Apr 17 23:34:25.506309 systemd[1]: Started sshd@4-172.31.27.239:22-4.175.71.9:35244.service - OpenSSH per-connection server daemon (4.175.71.9:35244). 
Apr 17 23:34:26.491467 sshd[2295]: Accepted publickey for core from 4.175.71.9 port 35244 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:26.494178 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:26.502902 systemd-logind[1998]: New session 5 of user core. Apr 17 23:34:26.509080 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:34:27.033424 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:34:27.034679 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:34:27.052608 sudo[2298]: pam_unix(sudo:session): session closed for user root Apr 17 23:34:27.212993 sshd[2295]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:27.218615 systemd-logind[1998]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:34:27.220949 systemd[1]: sshd@4-172.31.27.239:22-4.175.71.9:35244.service: Deactivated successfully. Apr 17 23:34:27.224026 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:34:27.227615 systemd-logind[1998]: Removed session 5. Apr 17 23:34:27.405338 systemd[1]: Started sshd@5-172.31.27.239:22-4.175.71.9:35250.service - OpenSSH per-connection server daemon (4.175.71.9:35250). Apr 17 23:34:28.425474 sshd[2303]: Accepted publickey for core from 4.175.71.9 port 35250 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:28.428174 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:28.435227 systemd-logind[1998]: New session 6 of user core. Apr 17 23:34:28.443080 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 17 23:34:28.967172 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:34:28.967869 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:34:28.974081 sudo[2307]: pam_unix(sudo:session): session closed for user root Apr 17 23:34:28.984161 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:34:28.984864 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:34:29.007331 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:34:29.015196 auditctl[2310]: No rules Apr 17 23:34:29.016219 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:34:29.016559 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:34:29.027219 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:34:29.077429 augenrules[2328]: No rules Apr 17 23:34:29.080927 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:34:29.083867 sudo[2306]: pam_unix(sudo:session): session closed for user root Apr 17 23:34:29.248975 sshd[2303]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:29.256092 systemd[1]: sshd@5-172.31.27.239:22-4.175.71.9:35250.service: Deactivated successfully. Apr 17 23:34:29.261375 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:34:29.262690 systemd-logind[1998]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:34:29.264440 systemd-logind[1998]: Removed session 6. Apr 17 23:34:29.418337 systemd[1]: Started sshd@6-172.31.27.239:22-4.175.71.9:35256.service - OpenSSH per-connection server daemon (4.175.71.9:35256). Apr 17 23:34:29.866574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 17 23:34:29.877617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:34:30.221895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:30.238244 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:34:30.311058 kubelet[2346]: E0417 23:34:30.310971 2346 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:34:30.318667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:34:30.320032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:34:30.417397 sshd[2336]: Accepted publickey for core from 4.175.71.9 port 35256 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:34:30.420551 sshd[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:30.428953 systemd-logind[1998]: New session 7 of user core. Apr 17 23:34:30.437057 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:34:30.945161 sudo[2354]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:34:30.945873 sudo[2354]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:34:31.438296 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 17 23:34:31.444317 (dockerd)[2370]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:34:31.850256 dockerd[2370]: time="2026-04-17T23:34:31.850077736Z" level=info msg="Starting up" Apr 17 23:34:31.980684 systemd[1]: var-lib-docker-metacopy\x2dcheck255287027-merged.mount: Deactivated successfully. Apr 17 23:34:31.993481 dockerd[2370]: time="2026-04-17T23:34:31.993419428Z" level=info msg="Loading containers: start." Apr 17 23:34:32.162842 kernel: Initializing XFRM netlink socket Apr 17 23:34:32.199313 (udev-worker)[2392]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:34:32.308325 systemd-networkd[1925]: docker0: Link UP Apr 17 23:34:32.331265 dockerd[2370]: time="2026-04-17T23:34:32.331192526Z" level=info msg="Loading containers: done." Apr 17 23:34:32.356870 dockerd[2370]: time="2026-04-17T23:34:32.356740718Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:34:32.357062 dockerd[2370]: time="2026-04-17T23:34:32.356993774Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:34:32.357316 dockerd[2370]: time="2026-04-17T23:34:32.357262862Z" level=info msg="Daemon has completed initialization" Apr 17 23:34:32.406522 dockerd[2370]: time="2026-04-17T23:34:32.406437182Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:34:32.407071 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 17 23:34:33.231655 containerd[2021]: time="2026-04-17T23:34:33.231591014Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 23:34:33.879693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692646095.mount: Deactivated successfully. Apr 17 23:34:35.375777 containerd[2021]: time="2026-04-17T23:34:35.375703949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:35.378847 containerd[2021]: time="2026-04-17T23:34:35.378146405Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008787" Apr 17 23:34:35.378847 containerd[2021]: time="2026-04-17T23:34:35.378385301Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:35.385644 containerd[2021]: time="2026-04-17T23:34:35.385528625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:35.389008 containerd[2021]: time="2026-04-17T23:34:35.388368941Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 2.156714555s" Apr 17 23:34:35.389008 containerd[2021]: time="2026-04-17T23:34:35.388450577Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\"" Apr 17 23:34:35.389477 containerd[2021]: 
time="2026-04-17T23:34:35.389413553Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 23:34:36.904390 containerd[2021]: time="2026-04-17T23:34:36.904325577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:36.908179 containerd[2021]: time="2026-04-17T23:34:36.908126781Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297774" Apr 17 23:34:36.908724 containerd[2021]: time="2026-04-17T23:34:36.908683377Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:36.915925 containerd[2021]: time="2026-04-17T23:34:36.915853377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:36.918928 containerd[2021]: time="2026-04-17T23:34:36.918848229Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.52935838s" Apr 17 23:34:36.918928 containerd[2021]: time="2026-04-17T23:34:36.918923481Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\"" Apr 17 23:34:36.922921 containerd[2021]: time="2026-04-17T23:34:36.922853037Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 
23:34:38.231849 containerd[2021]: time="2026-04-17T23:34:38.231528583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:38.233833 containerd[2021]: time="2026-04-17T23:34:38.233742463Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141358" Apr 17 23:34:38.235308 containerd[2021]: time="2026-04-17T23:34:38.235234483Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:38.241816 containerd[2021]: time="2026-04-17T23:34:38.240987607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:38.243541 containerd[2021]: time="2026-04-17T23:34:38.243491539Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.320567786s" Apr 17 23:34:38.243823 containerd[2021]: time="2026-04-17T23:34:38.243666103Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\"" Apr 17 23:34:38.245236 containerd[2021]: time="2026-04-17T23:34:38.245187151Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 23:34:39.626051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2668962563.mount: Deactivated successfully. 
Apr 17 23:34:40.208177 containerd[2021]: time="2026-04-17T23:34:40.208110777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:40.210157 containerd[2021]: time="2026-04-17T23:34:40.210111585Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040508" Apr 17 23:34:40.210577 containerd[2021]: time="2026-04-17T23:34:40.210540657Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:40.214294 containerd[2021]: time="2026-04-17T23:34:40.214241049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:40.215876 containerd[2021]: time="2026-04-17T23:34:40.215779533Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.97052949s" Apr 17 23:34:40.216012 containerd[2021]: time="2026-04-17T23:34:40.215875389Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\"" Apr 17 23:34:40.217391 containerd[2021]: time="2026-04-17T23:34:40.217327365Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 23:34:40.366539 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 23:34:40.375154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 17 23:34:40.704055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:40.725565 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:34:40.803653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount387300537.mount: Deactivated successfully. Apr 17 23:34:40.840216 kubelet[2591]: E0417 23:34:40.840052 2591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:34:40.846053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:34:40.847428 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:34:42.017768 containerd[2021]: time="2026-04-17T23:34:42.017683594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:42.020278 containerd[2021]: time="2026-04-17T23:34:42.020029546Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Apr 17 23:34:42.023339 containerd[2021]: time="2026-04-17T23:34:42.022277410Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:42.029011 containerd[2021]: time="2026-04-17T23:34:42.028947454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:42.031604 containerd[2021]: 
time="2026-04-17T23:34:42.031553362Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.814164377s" Apr 17 23:34:42.031763 containerd[2021]: time="2026-04-17T23:34:42.031734082Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Apr 17 23:34:42.032547 containerd[2021]: time="2026-04-17T23:34:42.032469250Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 23:34:42.550768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2998385410.mount: Deactivated successfully. Apr 17 23:34:42.563475 containerd[2021]: time="2026-04-17T23:34:42.563391793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:42.565695 containerd[2021]: time="2026-04-17T23:34:42.565241701Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Apr 17 23:34:42.569188 containerd[2021]: time="2026-04-17T23:34:42.567887053Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:42.574824 containerd[2021]: time="2026-04-17T23:34:42.572995765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:42.574824 containerd[2021]: time="2026-04-17T23:34:42.574621561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 542.091795ms" Apr 17 23:34:42.574824 containerd[2021]: time="2026-04-17T23:34:42.574667089Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 17 23:34:42.576472 containerd[2021]: time="2026-04-17T23:34:42.576432385Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 23:34:43.142609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount660123556.mount: Deactivated successfully. Apr 17 23:34:44.618001 containerd[2021]: time="2026-04-17T23:34:44.617929551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:44.620206 containerd[2021]: time="2026-04-17T23:34:44.620153631Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886366" Apr 17 23:34:44.621446 containerd[2021]: time="2026-04-17T23:34:44.620882535Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:44.628814 containerd[2021]: time="2026-04-17T23:34:44.627046767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:44.629932 containerd[2021]: time="2026-04-17T23:34:44.629880483Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.05326295s" Apr 17 23:34:44.630069 containerd[2021]: time="2026-04-17T23:34:44.630037959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Apr 17 23:34:46.660357 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 17 23:34:50.866751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 17 23:34:50.877232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:34:51.251291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:51.261956 (kubelet)[2750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:34:51.349977 kubelet[2750]: E0417 23:34:51.349908 2750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:34:51.355430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:34:51.356983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:34:52.560023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:52.575308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:34:52.638410 systemd[1]: Reloading requested from client PID 2764 ('systemctl') (unit session-7.scope)... Apr 17 23:34:52.638449 systemd[1]: Reloading... Apr 17 23:34:52.885831 zram_generator::config[2807]: No configuration found. 
Apr 17 23:34:53.136765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:34:53.324899 systemd[1]: Reloading finished in 685 ms. Apr 17 23:34:53.418531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:53.429384 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:34:53.430645 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:34:53.431902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:53.443737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:34:53.772743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:53.796375 (kubelet)[2869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:34:53.868386 kubelet[2869]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:34:53.868386 kubelet[2869]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:34:53.868386 kubelet[2869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:34:53.869000 kubelet[2869]: I0417 23:34:53.868453 2869 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:34:54.970264 kubelet[2869]: I0417 23:34:54.970208 2869 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:34:54.970264 kubelet[2869]: I0417 23:34:54.970259 2869 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:34:54.970946 kubelet[2869]: I0417 23:34:54.970684 2869 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:34:55.017842 kubelet[2869]: E0417 23:34:55.017556 2869 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.27.239:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.239:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:34:55.018261 kubelet[2869]: I0417 23:34:55.018211 2869 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:34:55.030108 kubelet[2869]: E0417 23:34:55.030048 2869 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:34:55.030108 kubelet[2869]: I0417 23:34:55.030107 2869 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:34:55.037113 kubelet[2869]: I0417 23:34:55.037050 2869 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:34:55.040865 kubelet[2869]: I0417 23:34:55.040774 2869 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:34:55.041145 kubelet[2869]: I0417 23:34:55.040854 2869 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-239","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:34:55.041307 kubelet[2869]: I0417 23:34:55.041146 2869 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:34:55.041307 kubelet[2869]: I0417 23:34:55.041167 2869 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:34:55.041559 kubelet[2869]: I0417 23:34:55.041516 2869 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:34:55.047513 kubelet[2869]: I0417 23:34:55.047458 2869 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:34:55.047755 kubelet[2869]: I0417 23:34:55.047713 2869 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:34:55.047858 kubelet[2869]: I0417 23:34:55.047777 2869 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:34:55.047858 kubelet[2869]: I0417 23:34:55.047827 2869 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:34:55.051751 kubelet[2869]: E0417 23:34:55.051678 2869 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.27.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-239&limit=500&resourceVersion=0\": dial tcp 172.31.27.239:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:34:55.054373 kubelet[2869]: E0417 23:34:55.054278 2869 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.27.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.239:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:34:55.054660 kubelet[2869]: I0417 23:34:55.054632 2869 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:34:55.055912 kubelet[2869]: I0417 23:34:55.055881 2869 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 
23:34:55.056295 kubelet[2869]: W0417 23:34:55.056274 2869 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:34:55.062508 kubelet[2869]: I0417 23:34:55.062476 2869 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:34:55.062766 kubelet[2869]: I0417 23:34:55.062745 2869 server.go:1289] "Started kubelet" Apr 17 23:34:55.071883 kubelet[2869]: I0417 23:34:55.071843 2869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:34:55.078757 kubelet[2869]: E0417 23:34:55.076504 2869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.239:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.239:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-239.18a7490c44be319b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-239,UID:ip-172-31-27-239,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-239,},FirstTimestamp:2026-04-17 23:34:55.062700443 +0000 UTC m=+1.255222579,LastTimestamp:2026-04-17 23:34:55.062700443 +0000 UTC m=+1.255222579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-239,}" Apr 17 23:34:55.083632 kubelet[2869]: I0417 23:34:55.083568 2869 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:34:55.084081 kubelet[2869]: E0417 23:34:55.084026 2869 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-239\" not found" Apr 17 23:34:55.084654 kubelet[2869]: I0417 23:34:55.084615 2869 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:34:55.084910 kubelet[2869]: I0417 23:34:55.084750 2869 reconciler.go:26] "Reconciler: start 
to sync state" Apr 17 23:34:55.085555 kubelet[2869]: E0417 23:34:55.085507 2869 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.27.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.239:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:34:55.085717 kubelet[2869]: E0417 23:34:55.085663 2869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-239?timeout=10s\": dial tcp 172.31.27.239:6443: connect: connection refused" interval="200ms" Apr 17 23:34:55.086371 kubelet[2869]: I0417 23:34:55.086333 2869 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:34:55.086553 kubelet[2869]: I0417 23:34:55.086518 2869 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:34:55.089341 kubelet[2869]: I0417 23:34:55.088898 2869 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:34:55.090634 kubelet[2869]: I0417 23:34:55.090538 2869 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:34:55.097737 kubelet[2869]: I0417 23:34:55.097640 2869 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:34:55.098214 kubelet[2869]: I0417 23:34:55.098152 2869 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:34:55.098583 kubelet[2869]: I0417 23:34:55.098531 2869 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:34:55.102334 kubelet[2869]: I0417 
23:34:55.102281 2869 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:34:55.119119 kubelet[2869]: I0417 23:34:55.119052 2869 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:34:55.122587 kubelet[2869]: I0417 23:34:55.122517 2869 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:34:55.122587 kubelet[2869]: I0417 23:34:55.122582 2869 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:34:55.122772 kubelet[2869]: I0417 23:34:55.122621 2869 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:34:55.122772 kubelet[2869]: I0417 23:34:55.122637 2869 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:34:55.122772 kubelet[2869]: E0417 23:34:55.122705 2869 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:34:55.134841 kubelet[2869]: E0417 23:34:55.134349 2869 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.27.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.239:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:34:55.142613 kubelet[2869]: I0417 23:34:55.142581 2869 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:34:55.142873 kubelet[2869]: I0417 23:34:55.142842 2869 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:34:55.142993 kubelet[2869]: I0417 23:34:55.142975 2869 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:34:55.149114 kubelet[2869]: I0417 23:34:55.149059 2869 policy_none.go:49] "None policy: Start" Apr 17 23:34:55.149114 kubelet[2869]: I0417 23:34:55.149105 2869 
memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:34:55.149303 kubelet[2869]: I0417 23:34:55.149131 2869 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:34:55.161404 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:34:55.181446 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 23:34:55.184731 kubelet[2869]: E0417 23:34:55.184171 2869 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-239\" not found" Apr 17 23:34:55.190291 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 17 23:34:55.200646 kubelet[2869]: E0417 23:34:55.199756 2869 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:34:55.200646 kubelet[2869]: I0417 23:34:55.200055 2869 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:34:55.200646 kubelet[2869]: I0417 23:34:55.200074 2869 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:34:55.200646 kubelet[2869]: I0417 23:34:55.200550 2869 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:34:55.203623 kubelet[2869]: E0417 23:34:55.203585 2869 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:34:55.204408 kubelet[2869]: E0417 23:34:55.204345 2869 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-239\" not found" Apr 17 23:34:55.246419 systemd[1]: Created slice kubepods-burstable-poddd08bad66b468c8bdd453db18cbd65fe.slice - libcontainer container kubepods-burstable-poddd08bad66b468c8bdd453db18cbd65fe.slice. 
Apr 17 23:34:55.263873 kubelet[2869]: E0417 23:34:55.263210 2869 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-239\" not found" node="ip-172-31-27-239" Apr 17 23:34:55.268694 systemd[1]: Created slice kubepods-burstable-podc959f2d0b009d421fa800b73f856dac6.slice - libcontainer container kubepods-burstable-podc959f2d0b009d421fa800b73f856dac6.slice. Apr 17 23:34:55.286263 kubelet[2869]: E0417 23:34:55.286013 2869 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-239\" not found" node="ip-172-31-27-239" Apr 17 23:34:55.286912 kubelet[2869]: E0417 23:34:55.286836 2869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-239?timeout=10s\": dial tcp 172.31.27.239:6443: connect: connection refused" interval="400ms" Apr 17 23:34:55.291449 systemd[1]: Created slice kubepods-burstable-pod343cb6665c04040624e9480b0b106705.slice - libcontainer container kubepods-burstable-pod343cb6665c04040624e9480b0b106705.slice. 
Apr 17 23:34:55.296853 kubelet[2869]: E0417 23:34:55.296677 2869 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-239\" not found" node="ip-172-31-27-239" Apr 17 23:34:55.302006 kubelet[2869]: I0417 23:34:55.301960 2869 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-239" Apr 17 23:34:55.302514 kubelet[2869]: E0417 23:34:55.302467 2869 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.239:6443/api/v1/nodes\": dial tcp 172.31.27.239:6443: connect: connection refused" node="ip-172-31-27-239" Apr 17 23:34:55.385475 kubelet[2869]: I0417 23:34:55.385403 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd08bad66b468c8bdd453db18cbd65fe-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-239\" (UID: \"dd08bad66b468c8bdd453db18cbd65fe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:34:55.385475 kubelet[2869]: I0417 23:34:55.385476 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd08bad66b468c8bdd453db18cbd65fe-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-239\" (UID: \"dd08bad66b468c8bdd453db18cbd65fe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:34:55.385672 kubelet[2869]: I0417 23:34:55.385517 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd08bad66b468c8bdd453db18cbd65fe-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-239\" (UID: \"dd08bad66b468c8bdd453db18cbd65fe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:34:55.385672 kubelet[2869]: I0417 23:34:55.385559 2869 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/343cb6665c04040624e9480b0b106705-ca-certs\") pod \"kube-apiserver-ip-172-31-27-239\" (UID: \"343cb6665c04040624e9480b0b106705\") " pod="kube-system/kube-apiserver-ip-172-31-27-239" Apr 17 23:34:55.385672 kubelet[2869]: I0417 23:34:55.385594 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/343cb6665c04040624e9480b0b106705-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-239\" (UID: \"343cb6665c04040624e9480b0b106705\") " pod="kube-system/kube-apiserver-ip-172-31-27-239" Apr 17 23:34:55.385672 kubelet[2869]: I0417 23:34:55.385629 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd08bad66b468c8bdd453db18cbd65fe-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-239\" (UID: \"dd08bad66b468c8bdd453db18cbd65fe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:34:55.385920 kubelet[2869]: I0417 23:34:55.385670 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd08bad66b468c8bdd453db18cbd65fe-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-239\" (UID: \"dd08bad66b468c8bdd453db18cbd65fe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:34:55.385920 kubelet[2869]: I0417 23:34:55.385706 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c959f2d0b009d421fa800b73f856dac6-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-239\" (UID: \"c959f2d0b009d421fa800b73f856dac6\") " pod="kube-system/kube-scheduler-ip-172-31-27-239" Apr 17 23:34:55.385920 kubelet[2869]: 
I0417 23:34:55.385743 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/343cb6665c04040624e9480b0b106705-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-239\" (UID: \"343cb6665c04040624e9480b0b106705\") " pod="kube-system/kube-apiserver-ip-172-31-27-239" Apr 17 23:34:55.504897 kubelet[2869]: I0417 23:34:55.504735 2869 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-239" Apr 17 23:34:55.505826 kubelet[2869]: E0417 23:34:55.505291 2869 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.239:6443/api/v1/nodes\": dial tcp 172.31.27.239:6443: connect: connection refused" node="ip-172-31-27-239" Apr 17 23:34:55.565035 containerd[2021]: time="2026-04-17T23:34:55.564958141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-239,Uid:dd08bad66b468c8bdd453db18cbd65fe,Namespace:kube-system,Attempt:0,}" Apr 17 23:34:55.588181 containerd[2021]: time="2026-04-17T23:34:55.587774930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-239,Uid:c959f2d0b009d421fa800b73f856dac6,Namespace:kube-system,Attempt:0,}" Apr 17 23:34:55.601835 containerd[2021]: time="2026-04-17T23:34:55.601756850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-239,Uid:343cb6665c04040624e9480b0b106705,Namespace:kube-system,Attempt:0,}" Apr 17 23:34:55.688005 kubelet[2869]: E0417 23:34:55.687938 2869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-239?timeout=10s\": dial tcp 172.31.27.239:6443: connect: connection refused" interval="800ms" Apr 17 23:34:55.908057 kubelet[2869]: I0417 23:34:55.907996 2869 kubelet_node_status.go:75] "Attempting to register node" 
node="ip-172-31-27-239" Apr 17 23:34:55.908530 kubelet[2869]: E0417 23:34:55.908484 2869 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.239:6443/api/v1/nodes\": dial tcp 172.31.27.239:6443: connect: connection refused" node="ip-172-31-27-239" Apr 17 23:34:55.972584 kubelet[2869]: E0417 23:34:55.972520 2869 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.27.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.239:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:34:56.102548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733017532.mount: Deactivated successfully. Apr 17 23:34:56.118858 containerd[2021]: time="2026-04-17T23:34:56.118212120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:34:56.123395 containerd[2021]: time="2026-04-17T23:34:56.123324600Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:34:56.126319 containerd[2021]: time="2026-04-17T23:34:56.126248772Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:34:56.129125 containerd[2021]: time="2026-04-17T23:34:56.129028008Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:34:56.130498 containerd[2021]: time="2026-04-17T23:34:56.130436616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, 
bytes read=269173" Apr 17 23:34:56.134065 containerd[2021]: time="2026-04-17T23:34:56.133975524Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:34:56.137155 containerd[2021]: time="2026-04-17T23:34:56.137090124Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:34:56.143817 containerd[2021]: time="2026-04-17T23:34:56.143038980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:34:56.144597 containerd[2021]: time="2026-04-17T23:34:56.144547296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.473043ms" Apr 17 23:34:56.148240 containerd[2021]: time="2026-04-17T23:34:56.148160568Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.238002ms" Apr 17 23:34:56.164857 containerd[2021]: time="2026-04-17T23:34:56.164675076Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.788974ms" Apr 17 23:34:56.376109 containerd[2021]: time="2026-04-17T23:34:56.375924085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:56.376300 containerd[2021]: time="2026-04-17T23:34:56.376136449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:56.376300 containerd[2021]: time="2026-04-17T23:34:56.376243945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:56.376701 containerd[2021]: time="2026-04-17T23:34:56.376606765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:56.378075 containerd[2021]: time="2026-04-17T23:34:56.377777185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:56.378075 containerd[2021]: time="2026-04-17T23:34:56.377935801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:56.379822 containerd[2021]: time="2026-04-17T23:34:56.377973913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:56.379822 containerd[2021]: time="2026-04-17T23:34:56.379060285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:56.387449 containerd[2021]: time="2026-04-17T23:34:56.387168889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:56.387905 containerd[2021]: time="2026-04-17T23:34:56.387810097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:56.388242 containerd[2021]: time="2026-04-17T23:34:56.388160521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:56.389270 kubelet[2869]: E0417 23:34:56.389211 2869 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.27.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.239:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:34:56.390852 containerd[2021]: time="2026-04-17T23:34:56.390649766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:56.400932 kubelet[2869]: E0417 23:34:56.400647 2869 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.27.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-239&limit=500&resourceVersion=0\": dial tcp 172.31.27.239:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:34:56.441946 systemd[1]: Started cri-containerd-30a7ed62a59cefc6d75d347d81a756e6ad7545f6c60a4fd7d38997a1d64c357c.scope - libcontainer container 30a7ed62a59cefc6d75d347d81a756e6ad7545f6c60a4fd7d38997a1d64c357c. Apr 17 23:34:56.451421 systemd[1]: Started cri-containerd-e8e2e3adc609ba51f3d661b109452f5ce13520a125fc57235d7ed030c51565ac.scope - libcontainer container e8e2e3adc609ba51f3d661b109452f5ce13520a125fc57235d7ed030c51565ac. 
Apr 17 23:34:56.467182 kubelet[2869]: E0417 23:34:56.466892 2869 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.27.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.239:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:34:56.471147 systemd[1]: Started cri-containerd-74060ee81740f6cf9c731fcb8276900a69241d10fca14f4d03c14fb0e6b70411.scope - libcontainer container 74060ee81740f6cf9c731fcb8276900a69241d10fca14f4d03c14fb0e6b70411. Apr 17 23:34:56.489636 kubelet[2869]: E0417 23:34:56.489556 2869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-239?timeout=10s\": dial tcp 172.31.27.239:6443: connect: connection refused" interval="1.6s" Apr 17 23:34:56.556141 containerd[2021]: time="2026-04-17T23:34:56.556063838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-239,Uid:c959f2d0b009d421fa800b73f856dac6,Namespace:kube-system,Attempt:0,} returns sandbox id \"74060ee81740f6cf9c731fcb8276900a69241d10fca14f4d03c14fb0e6b70411\"" Apr 17 23:34:56.572142 containerd[2021]: time="2026-04-17T23:34:56.571977410Z" level=info msg="CreateContainer within sandbox \"74060ee81740f6cf9c731fcb8276900a69241d10fca14f4d03c14fb0e6b70411\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:34:56.603039 containerd[2021]: time="2026-04-17T23:34:56.602406495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-239,Uid:dd08bad66b468c8bdd453db18cbd65fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8e2e3adc609ba51f3d661b109452f5ce13520a125fc57235d7ed030c51565ac\"" Apr 17 23:34:56.608577 containerd[2021]: time="2026-04-17T23:34:56.608325195Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-239,Uid:343cb6665c04040624e9480b0b106705,Namespace:kube-system,Attempt:0,} returns sandbox id \"30a7ed62a59cefc6d75d347d81a756e6ad7545f6c60a4fd7d38997a1d64c357c\"" Apr 17 23:34:56.619278 containerd[2021]: time="2026-04-17T23:34:56.619218255Z" level=info msg="CreateContainer within sandbox \"e8e2e3adc609ba51f3d661b109452f5ce13520a125fc57235d7ed030c51565ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:34:56.623173 containerd[2021]: time="2026-04-17T23:34:56.622563627Z" level=info msg="CreateContainer within sandbox \"74060ee81740f6cf9c731fcb8276900a69241d10fca14f4d03c14fb0e6b70411\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e99231da1f6d09bfc4891469ae1b7c0357e714ce4488abb34815834c5d6c1e0\"" Apr 17 23:34:56.623173 containerd[2021]: time="2026-04-17T23:34:56.623016855Z" level=info msg="CreateContainer within sandbox \"30a7ed62a59cefc6d75d347d81a756e6ad7545f6c60a4fd7d38997a1d64c357c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:34:56.624189 containerd[2021]: time="2026-04-17T23:34:56.624148251Z" level=info msg="StartContainer for \"4e99231da1f6d09bfc4891469ae1b7c0357e714ce4488abb34815834c5d6c1e0\"" Apr 17 23:34:56.654165 containerd[2021]: time="2026-04-17T23:34:56.654091971Z" level=info msg="CreateContainer within sandbox \"e8e2e3adc609ba51f3d661b109452f5ce13520a125fc57235d7ed030c51565ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de4b44ed3a446e616c640a7ae324696df426f7259e8933778579cd5bbfb839fa\"" Apr 17 23:34:56.655361 containerd[2021]: time="2026-04-17T23:34:56.655311519Z" level=info msg="StartContainer for \"de4b44ed3a446e616c640a7ae324696df426f7259e8933778579cd5bbfb839fa\"" Apr 17 23:34:56.684539 containerd[2021]: time="2026-04-17T23:34:56.684479055Z" level=info msg="CreateContainer within sandbox 
\"30a7ed62a59cefc6d75d347d81a756e6ad7545f6c60a4fd7d38997a1d64c357c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc10c246c9803e2108c4134e618bf0501273faafee07f08fa50fede00d94cbd6\"" Apr 17 23:34:56.684529 systemd[1]: Started cri-containerd-4e99231da1f6d09bfc4891469ae1b7c0357e714ce4488abb34815834c5d6c1e0.scope - libcontainer container 4e99231da1f6d09bfc4891469ae1b7c0357e714ce4488abb34815834c5d6c1e0. Apr 17 23:34:56.687230 containerd[2021]: time="2026-04-17T23:34:56.686995011Z" level=info msg="StartContainer for \"dc10c246c9803e2108c4134e618bf0501273faafee07f08fa50fede00d94cbd6\"" Apr 17 23:34:56.713911 kubelet[2869]: I0417 23:34:56.711739 2869 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-239" Apr 17 23:34:56.713911 kubelet[2869]: E0417 23:34:56.712882 2869 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.239:6443/api/v1/nodes\": dial tcp 172.31.27.239:6443: connect: connection refused" node="ip-172-31-27-239" Apr 17 23:34:56.728344 systemd[1]: Started cri-containerd-de4b44ed3a446e616c640a7ae324696df426f7259e8933778579cd5bbfb839fa.scope - libcontainer container de4b44ed3a446e616c640a7ae324696df426f7259e8933778579cd5bbfb839fa. Apr 17 23:34:56.771576 systemd[1]: Started cri-containerd-dc10c246c9803e2108c4134e618bf0501273faafee07f08fa50fede00d94cbd6.scope - libcontainer container dc10c246c9803e2108c4134e618bf0501273faafee07f08fa50fede00d94cbd6. 
Apr 17 23:34:56.820656 containerd[2021]: time="2026-04-17T23:34:56.820422856Z" level=info msg="StartContainer for \"4e99231da1f6d09bfc4891469ae1b7c0357e714ce4488abb34815834c5d6c1e0\" returns successfully" Apr 17 23:34:56.864710 containerd[2021]: time="2026-04-17T23:34:56.864583948Z" level=info msg="StartContainer for \"de4b44ed3a446e616c640a7ae324696df426f7259e8933778579cd5bbfb839fa\" returns successfully" Apr 17 23:34:56.912266 containerd[2021]: time="2026-04-17T23:34:56.912091420Z" level=info msg="StartContainer for \"dc10c246c9803e2108c4134e618bf0501273faafee07f08fa50fede00d94cbd6\" returns successfully" Apr 17 23:34:57.151705 kubelet[2869]: E0417 23:34:57.151585 2869 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.27.239:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.239:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:34:57.158970 kubelet[2869]: E0417 23:34:57.158907 2869 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-239\" not found" node="ip-172-31-27-239" Apr 17 23:34:57.166043 kubelet[2869]: E0417 23:34:57.165993 2869 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-239\" not found" node="ip-172-31-27-239" Apr 17 23:34:57.171945 kubelet[2869]: E0417 23:34:57.171890 2869 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-239\" not found" node="ip-172-31-27-239" Apr 17 23:34:58.174429 kubelet[2869]: E0417 23:34:58.174375 2869 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-239\" not found" node="ip-172-31-27-239" Apr 17 23:34:58.176179 kubelet[2869]: 
E0417 23:34:58.176027 2869 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-239\" not found" node="ip-172-31-27-239" Apr 17 23:34:58.315100 kubelet[2869]: I0417 23:34:58.315035 2869 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-239" Apr 17 23:34:59.199530 kubelet[2869]: E0417 23:34:59.199089 2869 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-239\" not found" node="ip-172-31-27-239" Apr 17 23:34:59.932826 kubelet[2869]: I0417 23:34:59.932314 2869 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-239" Apr 17 23:34:59.985826 kubelet[2869]: I0417 23:34:59.985130 2869 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-239" Apr 17 23:34:59.996367 kubelet[2869]: E0417 23:34:59.996297 2869 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-239\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-27-239" Apr 17 23:34:59.996367 kubelet[2869]: I0417 23:34:59.996347 2869 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-239" Apr 17 23:35:00.001362 kubelet[2869]: E0417 23:35:00.001292 2869 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-239\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-27-239" Apr 17 23:35:00.001362 kubelet[2869]: I0417 23:35:00.001339 2869 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:35:00.004261 kubelet[2869]: E0417 23:35:00.004191 2869 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-27-239\" is forbidden: no PriorityClass 
with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:35:00.057737 kubelet[2869]: I0417 23:35:00.057666 2869 apiserver.go:52] "Watching apiserver" Apr 17 23:35:00.085055 kubelet[2869]: I0417 23:35:00.084978 2869 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:35:01.217937 update_engine[1999]: I20260417 23:35:01.217840 1999 update_attempter.cc:509] Updating boot flags... Apr 17 23:35:01.385399 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3166) Apr 17 23:35:01.765846 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3168) Apr 17 23:35:02.284688 systemd[1]: Reloading requested from client PID 3335 ('systemctl') (unit session-7.scope)... Apr 17 23:35:02.284721 systemd[1]: Reloading... Apr 17 23:35:02.435835 zram_generator::config[3375]: No configuration found. Apr 17 23:35:02.696681 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:35:02.927927 systemd[1]: Reloading finished in 642 ms. Apr 17 23:35:03.012710 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:35:03.020090 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:35:03.021866 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:35:03.021940 systemd[1]: kubelet.service: Consumed 2.046s CPU time, 128.3M memory peak, 0B memory swap peak. Apr 17 23:35:03.030393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:35:03.404397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:35:03.420542 (kubelet)[3435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:35:03.547725 kubelet[3435]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:35:03.547725 kubelet[3435]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:35:03.547725 kubelet[3435]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:35:03.551064 kubelet[3435]: I0417 23:35:03.549907 3435 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:35:03.572888 kubelet[3435]: I0417 23:35:03.572266 3435 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:35:03.573037 kubelet[3435]: I0417 23:35:03.572883 3435 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:35:03.573842 kubelet[3435]: I0417 23:35:03.573392 3435 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:35:03.577863 kubelet[3435]: I0417 23:35:03.577771 3435 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:35:03.585540 kubelet[3435]: I0417 23:35:03.585141 3435 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:35:03.602310 kubelet[3435]: E0417 23:35:03.601702 3435 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:35:03.602310 kubelet[3435]: I0417 23:35:03.601992 3435 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:35:03.618244 kubelet[3435]: I0417 23:35:03.618043 3435 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 17 23:35:03.619834 kubelet[3435]: I0417 23:35:03.619182 3435 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:35:03.619834 kubelet[3435]: I0417 23:35:03.619237 3435 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-239","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManag
erPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:35:03.619834 kubelet[3435]: I0417 23:35:03.619682 3435 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:35:03.619834 kubelet[3435]: I0417 23:35:03.619703 3435 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:35:03.622249 kubelet[3435]: I0417 23:35:03.622195 3435 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:35:03.624824 kubelet[3435]: I0417 23:35:03.622961 3435 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:35:03.624824 kubelet[3435]: I0417 23:35:03.622993 3435 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:35:03.624824 kubelet[3435]: I0417 23:35:03.623044 3435 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:35:03.624824 kubelet[3435]: I0417 23:35:03.623074 3435 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:35:03.646577 kubelet[3435]: I0417 23:35:03.646364 3435 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:35:03.650772 kubelet[3435]: I0417 23:35:03.650441 3435 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:35:03.661988 kubelet[3435]: I0417 23:35:03.660115 3435 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:35:03.661988 kubelet[3435]: I0417 23:35:03.660208 3435 server.go:1289] "Started kubelet" Apr 17 23:35:03.672129 kubelet[3435]: I0417 23:35:03.668401 3435 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" 
Apr 17 23:35:03.682391 kubelet[3435]: I0417 23:35:03.681656 3435 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:35:03.684519 kubelet[3435]: I0417 23:35:03.683908 3435 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:35:03.684519 kubelet[3435]: E0417 23:35:03.684271 3435 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-239\" not found" Apr 17 23:35:03.684840 kubelet[3435]: I0417 23:35:03.684779 3435 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:35:03.685092 kubelet[3435]: I0417 23:35:03.685064 3435 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:35:03.689457 kubelet[3435]: I0417 23:35:03.689403 3435 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:35:03.694776 kubelet[3435]: I0417 23:35:03.693997 3435 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:35:03.694776 kubelet[3435]: I0417 23:35:03.694524 3435 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:35:03.704896 kubelet[3435]: I0417 23:35:03.704846 3435 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:35:03.743875 kubelet[3435]: E0417 23:35:03.741081 3435 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:35:03.743875 kubelet[3435]: I0417 23:35:03.742352 3435 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:35:03.743875 kubelet[3435]: I0417 23:35:03.742377 3435 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:35:03.743875 kubelet[3435]: I0417 23:35:03.742539 3435 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:35:03.816037 kubelet[3435]: I0417 23:35:03.815972 3435 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:35:03.824430 kubelet[3435]: I0417 23:35:03.823639 3435 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:35:03.824430 kubelet[3435]: I0417 23:35:03.823686 3435 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:35:03.824430 kubelet[3435]: I0417 23:35:03.823748 3435 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:35:03.824430 kubelet[3435]: I0417 23:35:03.823763 3435 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:35:03.824826 kubelet[3435]: E0417 23:35:03.824513 3435 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:35:03.927073 kubelet[3435]: E0417 23:35:03.926898 3435 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 23:35:03.948878 kubelet[3435]: I0417 23:35:03.948754 3435 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:35:03.950016 kubelet[3435]: I0417 23:35:03.949959 3435 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:35:03.950118 kubelet[3435]: I0417 23:35:03.950022 3435 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:35:03.950281 kubelet[3435]: I0417 23:35:03.950251 3435 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:35:03.950338 kubelet[3435]: I0417 23:35:03.950282 3435 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:35:03.950338 kubelet[3435]: I0417 23:35:03.950322 3435 policy_none.go:49] "None policy: Start" Apr 17 23:35:03.950442 kubelet[3435]: I0417 23:35:03.950341 3435 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:35:03.950442 kubelet[3435]: I0417 23:35:03.950362 3435 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:35:03.952335 kubelet[3435]: I0417 23:35:03.952278 3435 state_mem.go:75] "Updated machine memory state" Apr 17 23:35:03.966674 kubelet[3435]: E0417 23:35:03.965442 3435 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:35:03.966674 kubelet[3435]: I0417 23:35:03.965714 3435 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:35:03.966674 kubelet[3435]: I0417 23:35:03.965733 3435 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:35:03.971469 kubelet[3435]: I0417 23:35:03.970460 3435 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:35:03.985066 kubelet[3435]: E0417 23:35:03.985002 3435 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:35:04.091274 kubelet[3435]: I0417 23:35:04.091235 3435 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-239" Apr 17 23:35:04.115319 kubelet[3435]: I0417 23:35:04.115256 3435 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-27-239" Apr 17 23:35:04.115630 kubelet[3435]: I0417 23:35:04.115608 3435 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-239" Apr 17 23:35:04.129744 kubelet[3435]: I0417 23:35:04.129005 3435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:35:04.131834 kubelet[3435]: I0417 23:35:04.130823 3435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-239" Apr 17 23:35:04.132161 kubelet[3435]: I0417 23:35:04.132129 3435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-239" Apr 17 23:35:04.196339 kubelet[3435]: I0417 23:35:04.196075 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c959f2d0b009d421fa800b73f856dac6-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-239\" (UID: \"c959f2d0b009d421fa800b73f856dac6\") " pod="kube-system/kube-scheduler-ip-172-31-27-239" Apr 17 23:35:04.196339 kubelet[3435]: I0417 23:35:04.196252 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/343cb6665c04040624e9480b0b106705-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-239\" (UID: \"343cb6665c04040624e9480b0b106705\") " pod="kube-system/kube-apiserver-ip-172-31-27-239" Apr 17 23:35:04.198107 kubelet[3435]: I0417 23:35:04.197635 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/343cb6665c04040624e9480b0b106705-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-239\" (UID: \"343cb6665c04040624e9480b0b106705\") " pod="kube-system/kube-apiserver-ip-172-31-27-239" Apr 17 23:35:04.198107 kubelet[3435]: I0417 23:35:04.197938 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd08bad66b468c8bdd453db18cbd65fe-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-239\" (UID: \"dd08bad66b468c8bdd453db18cbd65fe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:35:04.198107 kubelet[3435]: I0417 23:35:04.198016 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd08bad66b468c8bdd453db18cbd65fe-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-239\" (UID: \"dd08bad66b468c8bdd453db18cbd65fe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:35:04.198107 kubelet[3435]: I0417 23:35:04.198102 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd08bad66b468c8bdd453db18cbd65fe-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-239\" (UID: \"dd08bad66b468c8bdd453db18cbd65fe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:35:04.198446 kubelet[3435]: I0417 23:35:04.198194 3435 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd08bad66b468c8bdd453db18cbd65fe-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-239\" (UID: \"dd08bad66b468c8bdd453db18cbd65fe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:35:04.198446 kubelet[3435]: I0417 23:35:04.198287 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd08bad66b468c8bdd453db18cbd65fe-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-239\" (UID: \"dd08bad66b468c8bdd453db18cbd65fe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-239" Apr 17 23:35:04.198446 kubelet[3435]: I0417 23:35:04.198377 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/343cb6665c04040624e9480b0b106705-ca-certs\") pod \"kube-apiserver-ip-172-31-27-239\" (UID: \"343cb6665c04040624e9480b0b106705\") " pod="kube-system/kube-apiserver-ip-172-31-27-239" Apr 17 23:35:04.646558 kubelet[3435]: I0417 23:35:04.646504 3435 apiserver.go:52] "Watching apiserver" Apr 17 23:35:04.685519 kubelet[3435]: I0417 23:35:04.685423 3435 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:35:04.937199 kubelet[3435]: I0417 23:35:04.936950 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-239" podStartSLOduration=0.936764772 podStartE2EDuration="936.764772ms" podCreationTimestamp="2026-04-17 23:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:35:04.936592476 +0000 UTC m=+1.503131925" watchObservedRunningTime="2026-04-17 23:35:04.936764772 +0000 UTC m=+1.503304197" Apr 17 
23:35:04.981815 kubelet[3435]: I0417 23:35:04.981221 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-239" podStartSLOduration=0.981198636 podStartE2EDuration="981.198636ms" podCreationTimestamp="2026-04-17 23:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:35:04.959140824 +0000 UTC m=+1.525680273" watchObservedRunningTime="2026-04-17 23:35:04.981198636 +0000 UTC m=+1.547738073" Apr 17 23:35:05.004821 kubelet[3435]: I0417 23:35:05.003412 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-239" podStartSLOduration=1.00339074 podStartE2EDuration="1.00339074s" podCreationTimestamp="2026-04-17 23:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:35:04.982023216 +0000 UTC m=+1.548562665" watchObservedRunningTime="2026-04-17 23:35:05.00339074 +0000 UTC m=+1.569930165" Apr 17 23:35:08.227644 kubelet[3435]: I0417 23:35:08.227590 3435 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:35:08.228913 containerd[2021]: time="2026-04-17T23:35:08.228689340Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:35:08.229627 kubelet[3435]: I0417 23:35:08.229042 3435 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:35:09.117741 systemd[1]: Created slice kubepods-besteffort-pod39032a8f_aa02_4de1_86c8_cd938cfcfe55.slice - libcontainer container kubepods-besteffort-pod39032a8f_aa02_4de1_86c8_cd938cfcfe55.slice. 
Apr 17 23:35:09.133818 kubelet[3435]: I0417 23:35:09.132234 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kjzb\" (UniqueName: \"kubernetes.io/projected/39032a8f-aa02-4de1-86c8-cd938cfcfe55-kube-api-access-9kjzb\") pod \"kube-proxy-58tv6\" (UID: \"39032a8f-aa02-4de1-86c8-cd938cfcfe55\") " pod="kube-system/kube-proxy-58tv6" Apr 17 23:35:09.133818 kubelet[3435]: I0417 23:35:09.132484 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39032a8f-aa02-4de1-86c8-cd938cfcfe55-lib-modules\") pod \"kube-proxy-58tv6\" (UID: \"39032a8f-aa02-4de1-86c8-cd938cfcfe55\") " pod="kube-system/kube-proxy-58tv6" Apr 17 23:35:09.133818 kubelet[3435]: I0417 23:35:09.132650 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39032a8f-aa02-4de1-86c8-cd938cfcfe55-kube-proxy\") pod \"kube-proxy-58tv6\" (UID: \"39032a8f-aa02-4de1-86c8-cd938cfcfe55\") " pod="kube-system/kube-proxy-58tv6" Apr 17 23:35:09.133818 kubelet[3435]: I0417 23:35:09.132688 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39032a8f-aa02-4de1-86c8-cd938cfcfe55-xtables-lock\") pod \"kube-proxy-58tv6\" (UID: \"39032a8f-aa02-4de1-86c8-cd938cfcfe55\") " pod="kube-system/kube-proxy-58tv6" Apr 17 23:35:09.432968 containerd[2021]: time="2026-04-17T23:35:09.432807134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58tv6,Uid:39032a8f-aa02-4de1-86c8-cd938cfcfe55,Namespace:kube-system,Attempt:0,}" Apr 17 23:35:09.481450 systemd[1]: Created slice kubepods-besteffort-pod3e61b9aa_8eca_40c6_9529_8b5a7b4da676.slice - libcontainer container kubepods-besteffort-pod3e61b9aa_8eca_40c6_9529_8b5a7b4da676.slice. 
Apr 17 23:35:09.506086 containerd[2021]: time="2026-04-17T23:35:09.505510599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:35:09.506086 containerd[2021]: time="2026-04-17T23:35:09.505603875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:35:09.506086 containerd[2021]: time="2026-04-17T23:35:09.505658247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:35:09.506086 containerd[2021]: time="2026-04-17T23:35:09.505838679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:35:09.535392 kubelet[3435]: I0417 23:35:09.535310 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e61b9aa-8eca-40c6-9529-8b5a7b4da676-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-g58mf\" (UID: \"3e61b9aa-8eca-40c6-9529-8b5a7b4da676\") " pod="tigera-operator/tigera-operator-6bf85f8dd-g58mf"
Apr 17 23:35:09.535392 kubelet[3435]: I0417 23:35:09.535393 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctzfz\" (UniqueName: \"kubernetes.io/projected/3e61b9aa-8eca-40c6-9529-8b5a7b4da676-kube-api-access-ctzfz\") pod \"tigera-operator-6bf85f8dd-g58mf\" (UID: \"3e61b9aa-8eca-40c6-9529-8b5a7b4da676\") " pod="tigera-operator/tigera-operator-6bf85f8dd-g58mf"
Apr 17 23:35:09.546137 systemd[1]: Started cri-containerd-b7d8135a010bb08acafd9870d1493f223379ec5bfe4b35e0648dcda6067e4c22.scope - libcontainer container b7d8135a010bb08acafd9870d1493f223379ec5bfe4b35e0648dcda6067e4c22.
Apr 17 23:35:09.591573 containerd[2021]: time="2026-04-17T23:35:09.591500703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58tv6,Uid:39032a8f-aa02-4de1-86c8-cd938cfcfe55,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7d8135a010bb08acafd9870d1493f223379ec5bfe4b35e0648dcda6067e4c22\""
Apr 17 23:35:09.602898 containerd[2021]: time="2026-04-17T23:35:09.602834955Z" level=info msg="CreateContainer within sandbox \"b7d8135a010bb08acafd9870d1493f223379ec5bfe4b35e0648dcda6067e4c22\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 17 23:35:09.622589 containerd[2021]: time="2026-04-17T23:35:09.622426503Z" level=info msg="CreateContainer within sandbox \"b7d8135a010bb08acafd9870d1493f223379ec5bfe4b35e0648dcda6067e4c22\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"037bed821aecbd85d5b80a96d90d3477cef94dac4ab00050edb0e628c640fbf8\""
Apr 17 23:35:09.624195 containerd[2021]: time="2026-04-17T23:35:09.623872395Z" level=info msg="StartContainer for \"037bed821aecbd85d5b80a96d90d3477cef94dac4ab00050edb0e628c640fbf8\""
Apr 17 23:35:09.679155 systemd[1]: Started cri-containerd-037bed821aecbd85d5b80a96d90d3477cef94dac4ab00050edb0e628c640fbf8.scope - libcontainer container 037bed821aecbd85d5b80a96d90d3477cef94dac4ab00050edb0e628c640fbf8.
Apr 17 23:35:09.739406 containerd[2021]: time="2026-04-17T23:35:09.739238332Z" level=info msg="StartContainer for \"037bed821aecbd85d5b80a96d90d3477cef94dac4ab00050edb0e628c640fbf8\" returns successfully"
Apr 17 23:35:09.794235 containerd[2021]: time="2026-04-17T23:35:09.794111428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-g58mf,Uid:3e61b9aa-8eca-40c6-9529-8b5a7b4da676,Namespace:tigera-operator,Attempt:0,}"
Apr 17 23:35:09.845211 containerd[2021]: time="2026-04-17T23:35:09.845060404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:35:09.845367 containerd[2021]: time="2026-04-17T23:35:09.845246164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:35:09.845427 containerd[2021]: time="2026-04-17T23:35:09.845360020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:35:09.845924 containerd[2021]: time="2026-04-17T23:35:09.845763976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:35:09.881225 systemd[1]: Started cri-containerd-4241a73979e70679144f8a0bd68ad73fd387375b071c44adaab6caed8d51d9fd.scope - libcontainer container 4241a73979e70679144f8a0bd68ad73fd387375b071c44adaab6caed8d51d9fd.
Apr 17 23:35:10.004362 containerd[2021]: time="2026-04-17T23:35:10.004208725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-g58mf,Uid:3e61b9aa-8eca-40c6-9529-8b5a7b4da676,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4241a73979e70679144f8a0bd68ad73fd387375b071c44adaab6caed8d51d9fd\""
Apr 17 23:35:10.010445 containerd[2021]: time="2026-04-17T23:35:10.009018577Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 17 23:35:10.270223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2268674708.mount: Deactivated successfully.
Apr 17 23:35:11.408468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount741886348.mount: Deactivated successfully.
Apr 17 23:35:17.459584 containerd[2021]: time="2026-04-17T23:35:17.459517366Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:35:17.461524 containerd[2021]: time="2026-04-17T23:35:17.461184886Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Apr 17 23:35:17.463762 containerd[2021]: time="2026-04-17T23:35:17.462758158Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:35:17.469981 containerd[2021]: time="2026-04-17T23:35:17.469926226Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:35:17.472083 containerd[2021]: time="2026-04-17T23:35:17.472009486Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 7.462906897s"
Apr 17 23:35:17.472083 containerd[2021]: time="2026-04-17T23:35:17.472077166Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Apr 17 23:35:17.480259 containerd[2021]: time="2026-04-17T23:35:17.480046930Z" level=info msg="CreateContainer within sandbox \"4241a73979e70679144f8a0bd68ad73fd387375b071c44adaab6caed8d51d9fd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 17 23:35:17.509104 containerd[2021]: time="2026-04-17T23:35:17.509040010Z" level=info msg="CreateContainer within sandbox \"4241a73979e70679144f8a0bd68ad73fd387375b071c44adaab6caed8d51d9fd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a\""
Apr 17 23:35:17.511308 containerd[2021]: time="2026-04-17T23:35:17.511150870Z" level=info msg="StartContainer for \"ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a\""
Apr 17 23:35:17.570128 systemd[1]: Started cri-containerd-ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a.scope - libcontainer container ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a.
Apr 17 23:35:17.621309 containerd[2021]: time="2026-04-17T23:35:17.621237395Z" level=info msg="StartContainer for \"ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a\" returns successfully"
Apr 17 23:35:17.950316 kubelet[3435]: I0417 23:35:17.949994 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-58tv6" podStartSLOduration=8.949973005 podStartE2EDuration="8.949973005s" podCreationTimestamp="2026-04-17 23:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:35:09.938028269 +0000 UTC m=+6.504567718" watchObservedRunningTime="2026-04-17 23:35:17.949973005 +0000 UTC m=+14.516512442"
Apr 17 23:35:26.105405 sudo[2354]: pam_unix(sudo:session): session closed for user root
Apr 17 23:35:26.267136 sshd[2336]: pam_unix(sshd:session): session closed for user core
Apr 17 23:35:26.279385 systemd[1]: sshd@6-172.31.27.239:22-4.175.71.9:35256.service: Deactivated successfully.
Apr 17 23:35:26.287569 systemd[1]: session-7.scope: Deactivated successfully.
Apr 17 23:35:26.288207 systemd[1]: session-7.scope: Consumed 11.874s CPU time, 152.5M memory peak, 0B memory swap peak.
Apr 17 23:35:26.291386 systemd-logind[1998]: Session 7 logged out. Waiting for processes to exit.
Apr 17 23:35:26.295506 systemd-logind[1998]: Removed session 7.
Apr 17 23:35:39.564420 kubelet[3435]: I0417 23:35:39.564324 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-g58mf" podStartSLOduration=23.097917095 podStartE2EDuration="30.564302708s" podCreationTimestamp="2026-04-17 23:35:09 +0000 UTC" firstStartedPulling="2026-04-17 23:35:10.007841713 +0000 UTC m=+6.574381150" lastFinishedPulling="2026-04-17 23:35:17.474227326 +0000 UTC m=+14.040766763" observedRunningTime="2026-04-17 23:35:17.953824825 +0000 UTC m=+14.520364286" watchObservedRunningTime="2026-04-17 23:35:39.564302708 +0000 UTC m=+36.130842169"
Apr 17 23:35:39.588688 systemd[1]: Created slice kubepods-besteffort-pod4f9bef4d_8936_4eaa_936a_63ec2652c469.slice - libcontainer container kubepods-besteffort-pod4f9bef4d_8936_4eaa_936a_63ec2652c469.slice.
Apr 17 23:35:39.637109 kubelet[3435]: I0417 23:35:39.636921 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f9bef4d-8936-4eaa-936a-63ec2652c469-tigera-ca-bundle\") pod \"calico-typha-8865459b-pb4tn\" (UID: \"4f9bef4d-8936-4eaa-936a-63ec2652c469\") " pod="calico-system/calico-typha-8865459b-pb4tn"
Apr 17 23:35:39.637109 kubelet[3435]: I0417 23:35:39.636999 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fxm6\" (UniqueName: \"kubernetes.io/projected/4f9bef4d-8936-4eaa-936a-63ec2652c469-kube-api-access-5fxm6\") pod \"calico-typha-8865459b-pb4tn\" (UID: \"4f9bef4d-8936-4eaa-936a-63ec2652c469\") " pod="calico-system/calico-typha-8865459b-pb4tn"
Apr 17 23:35:39.637109 kubelet[3435]: I0417 23:35:39.637038 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4f9bef4d-8936-4eaa-936a-63ec2652c469-typha-certs\") pod \"calico-typha-8865459b-pb4tn\" (UID: \"4f9bef4d-8936-4eaa-936a-63ec2652c469\") " pod="calico-system/calico-typha-8865459b-pb4tn"
Apr 17 23:35:39.778895 systemd[1]: Created slice kubepods-besteffort-pode2d5c810_dd18_4f03_b4c2_a1ed5ac323cc.slice - libcontainer container kubepods-besteffort-pode2d5c810_dd18_4f03_b4c2_a1ed5ac323cc.slice.
Apr 17 23:35:39.839392 kubelet[3435]: I0417 23:35:39.839237 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-var-run-calico\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.839769 kubelet[3435]: I0417 23:35:39.839654 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-bpffs\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.840116 kubelet[3435]: I0417 23:35:39.839913 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-cni-net-dir\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.840553 kubelet[3435]: I0417 23:35:39.840327 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-cni-log-dir\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.840553 kubelet[3435]: I0417 23:35:39.840469 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-flexvol-driver-host\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.840907 kubelet[3435]: I0417 23:35:39.840649 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-node-certs\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.840907 kubelet[3435]: I0417 23:35:39.840753 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-nodeproc\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.841445 kubelet[3435]: I0417 23:35:39.841109 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-policysync\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.841445 kubelet[3435]: I0417 23:35:39.841268 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-var-lib-calico\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.842713 kubelet[3435]: I0417 23:35:39.841640 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-xtables-lock\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.843120 kubelet[3435]: I0417 23:35:39.842920 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-sys-fs\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.843120 kubelet[3435]: I0417 23:35:39.843034 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-cni-bin-dir\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.843408 kubelet[3435]: I0417 23:35:39.843198 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-lib-modules\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.843819 kubelet[3435]: I0417 23:35:39.843538 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-tigera-ca-bundle\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.843819 kubelet[3435]: I0417 23:35:39.843658 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkz75\" (UniqueName: \"kubernetes.io/projected/e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc-kube-api-access-pkz75\") pod \"calico-node-hfhp8\" (UID: \"e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc\") " pod="calico-system/calico-node-hfhp8"
Apr 17 23:35:39.904877 containerd[2021]: time="2026-04-17T23:35:39.904422118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8865459b-pb4tn,Uid:4f9bef4d-8936-4eaa-936a-63ec2652c469,Namespace:calico-system,Attempt:0,}"
Apr 17 23:35:39.938120 kubelet[3435]: E0417 23:35:39.937403 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d"
Apr 17 23:35:39.967086 containerd[2021]: time="2026-04-17T23:35:39.965122366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:35:39.967086 containerd[2021]: time="2026-04-17T23:35:39.965234758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:35:39.967086 containerd[2021]: time="2026-04-17T23:35:39.965274754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:35:39.967086 containerd[2021]: time="2026-04-17T23:35:39.965434198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:35:40.012682 kubelet[3435]: E0417 23:35:40.011480 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.012682 kubelet[3435]: W0417 23:35:40.011524 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.013948 kubelet[3435]: E0417 23:35:40.013609 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.023562 kubelet[3435]: E0417 23:35:40.022066 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.024139 kubelet[3435]: W0417 23:35:40.023864 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.024139 kubelet[3435]: E0417 23:35:40.023936 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.037818 kubelet[3435]: E0417 23:35:40.036932 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.037818 kubelet[3435]: W0417 23:35:40.036971 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.037818 kubelet[3435]: E0417 23:35:40.037023 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.042152 systemd[1]: Started cri-containerd-b4b7a6d076c27beed59eebace2c3e92c8cc292f6627faef867e44bb3d006313e.scope - libcontainer container b4b7a6d076c27beed59eebace2c3e92c8cc292f6627faef867e44bb3d006313e.
Apr 17 23:35:40.051446 kubelet[3435]: E0417 23:35:40.051406 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.052216 kubelet[3435]: W0417 23:35:40.051951 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.052216 kubelet[3435]: E0417 23:35:40.052002 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.053624 kubelet[3435]: E0417 23:35:40.052920 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.053925 kubelet[3435]: W0417 23:35:40.053885 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.054285 kubelet[3435]: E0417 23:35:40.054027 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.054655 kubelet[3435]: E0417 23:35:40.054627 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.055734 kubelet[3435]: W0417 23:35:40.055018 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.055734 kubelet[3435]: E0417 23:35:40.055068 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.056368 kubelet[3435]: E0417 23:35:40.056332 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.056904 kubelet[3435]: W0417 23:35:40.056620 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.056904 kubelet[3435]: E0417 23:35:40.056688 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.057903 kubelet[3435]: E0417 23:35:40.057725 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.057903 kubelet[3435]: W0417 23:35:40.057760 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.058370 kubelet[3435]: E0417 23:35:40.058198 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.059887 kubelet[3435]: E0417 23:35:40.059847 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.060459 kubelet[3435]: W0417 23:35:40.060081 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.060459 kubelet[3435]: E0417 23:35:40.060169 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.061489 kubelet[3435]: E0417 23:35:40.061332 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.061489 kubelet[3435]: W0417 23:35:40.061366 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.061489 kubelet[3435]: E0417 23:35:40.061399 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.063420 kubelet[3435]: E0417 23:35:40.062902 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.063420 kubelet[3435]: W0417 23:35:40.062954 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.063420 kubelet[3435]: E0417 23:35:40.063005 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.064038 kubelet[3435]: E0417 23:35:40.063894 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.064750 kubelet[3435]: W0417 23:35:40.064529 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.064750 kubelet[3435]: E0417 23:35:40.064585 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.066631 kubelet[3435]: E0417 23:35:40.066250 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.066631 kubelet[3435]: W0417 23:35:40.066285 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.066631 kubelet[3435]: E0417 23:35:40.066400 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.068531 kubelet[3435]: E0417 23:35:40.067950 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.068531 kubelet[3435]: W0417 23:35:40.067985 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.068531 kubelet[3435]: E0417 23:35:40.068020 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.070759 kubelet[3435]: E0417 23:35:40.070549 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.070759 kubelet[3435]: W0417 23:35:40.070610 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.070759 kubelet[3435]: E0417 23:35:40.070643 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.073275 kubelet[3435]: E0417 23:35:40.073030 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.073275 kubelet[3435]: W0417 23:35:40.073064 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.073275 kubelet[3435]: E0417 23:35:40.073096 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.074165 kubelet[3435]: E0417 23:35:40.073818 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.074165 kubelet[3435]: W0417 23:35:40.073849 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.074165 kubelet[3435]: E0417 23:35:40.073879 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.076615 kubelet[3435]: E0417 23:35:40.075538 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.076615 kubelet[3435]: W0417 23:35:40.075572 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.076615 kubelet[3435]: E0417 23:35:40.075612 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.078522 kubelet[3435]: E0417 23:35:40.077836 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.078522 kubelet[3435]: W0417 23:35:40.077879 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.078522 kubelet[3435]: E0417 23:35:40.077940 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.079998 kubelet[3435]: E0417 23:35:40.079950 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.079998 kubelet[3435]: W0417 23:35:40.079990 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.080180 kubelet[3435]: E0417 23:35:40.080025 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.084009 kubelet[3435]: E0417 23:35:40.083945 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.084009 kubelet[3435]: W0417 23:35:40.083992 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.084270 kubelet[3435]: E0417 23:35:40.084026 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.087993 kubelet[3435]: E0417 23:35:40.087929 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.087993 kubelet[3435]: W0417 23:35:40.087976 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.088198 kubelet[3435]: E0417 23:35:40.088010 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.088591 kubelet[3435]: E0417 23:35:40.088546 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.088591 kubelet[3435]: W0417 23:35:40.088579 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.088746 kubelet[3435]: E0417 23:35:40.088606 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:35:40.090825 kubelet[3435]: E0417 23:35:40.090671 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:35:40.090825 kubelet[3435]: W0417 23:35:40.090706 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:35:40.090825 kubelet[3435]: E0417 23:35:40.090739 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 17 23:35:40.093993 kubelet[3435]: E0417 23:35:40.093934 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.093993 kubelet[3435]: W0417 23:35:40.093976 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.094200 kubelet[3435]: E0417 23:35:40.094011 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.096999 kubelet[3435]: E0417 23:35:40.096937 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.096999 kubelet[3435]: W0417 23:35:40.096981 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.097212 kubelet[3435]: E0417 23:35:40.097016 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.102887 kubelet[3435]: E0417 23:35:40.102827 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.102887 kubelet[3435]: W0417 23:35:40.102869 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.103139 kubelet[3435]: E0417 23:35:40.102903 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.103981 kubelet[3435]: E0417 23:35:40.103926 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.103981 kubelet[3435]: W0417 23:35:40.103966 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.104162 kubelet[3435]: E0417 23:35:40.104003 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.105876 kubelet[3435]: E0417 23:35:40.105601 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.105876 kubelet[3435]: W0417 23:35:40.105867 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.105876 kubelet[3435]: E0417 23:35:40.105902 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.107002 kubelet[3435]: E0417 23:35:40.106953 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.107002 kubelet[3435]: W0417 23:35:40.106992 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.108228 kubelet[3435]: E0417 23:35:40.107030 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.109780 kubelet[3435]: E0417 23:35:40.109693 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.109780 kubelet[3435]: W0417 23:35:40.109729 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.110365 kubelet[3435]: E0417 23:35:40.109956 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.110365 kubelet[3435]: I0417 23:35:40.110242 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ba48e602-86c6-47be-a12c-378408003d1d-socket-dir\") pod \"csi-node-driver-gq6hp\" (UID: \"ba48e602-86c6-47be-a12c-378408003d1d\") " pod="calico-system/csi-node-driver-gq6hp" Apr 17 23:35:40.112255 kubelet[3435]: E0417 23:35:40.112020 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.112255 kubelet[3435]: W0417 23:35:40.112063 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.112255 kubelet[3435]: E0417 23:35:40.112098 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.114079 containerd[2021]: time="2026-04-17T23:35:40.113252263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hfhp8,Uid:e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc,Namespace:calico-system,Attempt:0,}" Apr 17 23:35:40.114673 kubelet[3435]: E0417 23:35:40.114502 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.114673 kubelet[3435]: W0417 23:35:40.114544 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.114673 kubelet[3435]: E0417 23:35:40.114579 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.116987 kubelet[3435]: E0417 23:35:40.116931 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.116987 kubelet[3435]: W0417 23:35:40.116972 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.117198 kubelet[3435]: E0417 23:35:40.117007 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.117198 kubelet[3435]: I0417 23:35:40.117081 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmxbm\" (UniqueName: \"kubernetes.io/projected/ba48e602-86c6-47be-a12c-378408003d1d-kube-api-access-bmxbm\") pod \"csi-node-driver-gq6hp\" (UID: \"ba48e602-86c6-47be-a12c-378408003d1d\") " pod="calico-system/csi-node-driver-gq6hp" Apr 17 23:35:40.119088 kubelet[3435]: E0417 23:35:40.118944 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.119088 kubelet[3435]: W0417 23:35:40.118989 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.119088 kubelet[3435]: E0417 23:35:40.119025 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.120990 kubelet[3435]: E0417 23:35:40.120726 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.120990 kubelet[3435]: W0417 23:35:40.120767 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.122124 kubelet[3435]: E0417 23:35:40.121879 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.123969 kubelet[3435]: E0417 23:35:40.123917 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.123969 kubelet[3435]: W0417 23:35:40.123959 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.124163 kubelet[3435]: E0417 23:35:40.123993 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.124163 kubelet[3435]: I0417 23:35:40.124062 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba48e602-86c6-47be-a12c-378408003d1d-kubelet-dir\") pod \"csi-node-driver-gq6hp\" (UID: \"ba48e602-86c6-47be-a12c-378408003d1d\") " pod="calico-system/csi-node-driver-gq6hp" Apr 17 23:35:40.126107 kubelet[3435]: E0417 23:35:40.125761 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.126107 kubelet[3435]: W0417 23:35:40.125831 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.126107 kubelet[3435]: E0417 23:35:40.125867 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.130858 kubelet[3435]: E0417 23:35:40.130056 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.130858 kubelet[3435]: W0417 23:35:40.130096 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.130858 kubelet[3435]: E0417 23:35:40.130130 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.133664 kubelet[3435]: E0417 23:35:40.133603 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.133810 kubelet[3435]: W0417 23:35:40.133751 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.133998 kubelet[3435]: E0417 23:35:40.133920 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.135284 kubelet[3435]: I0417 23:35:40.135224 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ba48e602-86c6-47be-a12c-378408003d1d-varrun\") pod \"csi-node-driver-gq6hp\" (UID: \"ba48e602-86c6-47be-a12c-378408003d1d\") " pod="calico-system/csi-node-driver-gq6hp" Apr 17 23:35:40.136845 kubelet[3435]: E0417 23:35:40.136599 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.136845 kubelet[3435]: W0417 23:35:40.136648 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.136845 kubelet[3435]: E0417 23:35:40.136682 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.138566 kubelet[3435]: E0417 23:35:40.138286 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.138566 kubelet[3435]: W0417 23:35:40.138328 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.138566 kubelet[3435]: E0417 23:35:40.138388 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.139992 kubelet[3435]: E0417 23:35:40.139543 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.139992 kubelet[3435]: W0417 23:35:40.139634 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.139992 kubelet[3435]: E0417 23:35:40.139668 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.140667 kubelet[3435]: I0417 23:35:40.139887 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ba48e602-86c6-47be-a12c-378408003d1d-registration-dir\") pod \"csi-node-driver-gq6hp\" (UID: \"ba48e602-86c6-47be-a12c-378408003d1d\") " pod="calico-system/csi-node-driver-gq6hp" Apr 17 23:35:40.142958 kubelet[3435]: E0417 23:35:40.141633 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.142958 kubelet[3435]: W0417 23:35:40.141671 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.142958 kubelet[3435]: E0417 23:35:40.141705 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.144118 kubelet[3435]: E0417 23:35:40.143998 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.144118 kubelet[3435]: W0417 23:35:40.144033 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.144118 kubelet[3435]: E0417 23:35:40.144066 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.199142 containerd[2021]: time="2026-04-17T23:35:40.198994387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:35:40.199142 containerd[2021]: time="2026-04-17T23:35:40.199085887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:35:40.201112 containerd[2021]: time="2026-04-17T23:35:40.199112851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:40.201112 containerd[2021]: time="2026-04-17T23:35:40.199358491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:35:40.239109 containerd[2021]: time="2026-04-17T23:35:40.239012875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8865459b-pb4tn,Uid:4f9bef4d-8936-4eaa-936a-63ec2652c469,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4b7a6d076c27beed59eebace2c3e92c8cc292f6627faef867e44bb3d006313e\"" Apr 17 23:35:40.243051 kubelet[3435]: E0417 23:35:40.241809 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.243051 kubelet[3435]: W0417 23:35:40.241972 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.242137 systemd[1]: Started cri-containerd-e4537951aaf67bc51409feb123e13a18d1ce0667727a481151eddd5e6792c0d5.scope - libcontainer container e4537951aaf67bc51409feb123e13a18d1ce0667727a481151eddd5e6792c0d5. Apr 17 23:35:40.244841 kubelet[3435]: E0417 23:35:40.242007 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.246378 kubelet[3435]: E0417 23:35:40.246311 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.246570 kubelet[3435]: W0417 23:35:40.246462 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.246570 kubelet[3435]: E0417 23:35:40.246499 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.247349 containerd[2021]: time="2026-04-17T23:35:40.246862831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:35:40.247677 kubelet[3435]: E0417 23:35:40.247384 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.248369 kubelet[3435]: W0417 23:35:40.248298 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.248851 kubelet[3435]: E0417 23:35:40.248370 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.249808 kubelet[3435]: E0417 23:35:40.249231 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.249808 kubelet[3435]: W0417 23:35:40.249256 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.249808 kubelet[3435]: E0417 23:35:40.249392 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.250159 kubelet[3435]: E0417 23:35:40.250132 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.250215 kubelet[3435]: W0417 23:35:40.250154 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.250215 kubelet[3435]: E0417 23:35:40.250202 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.251095 kubelet[3435]: E0417 23:35:40.250698 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.251095 kubelet[3435]: W0417 23:35:40.250729 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.252339 kubelet[3435]: E0417 23:35:40.251560 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.252339 kubelet[3435]: E0417 23:35:40.252277 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.252339 kubelet[3435]: W0417 23:35:40.252302 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.252501 kubelet[3435]: E0417 23:35:40.252348 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.253487 kubelet[3435]: E0417 23:35:40.252775 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.253487 kubelet[3435]: W0417 23:35:40.252876 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.253487 kubelet[3435]: E0417 23:35:40.252903 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.253487 kubelet[3435]: E0417 23:35:40.253456 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.253487 kubelet[3435]: W0417 23:35:40.253477 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.255422 kubelet[3435]: E0417 23:35:40.253514 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.255422 kubelet[3435]: E0417 23:35:40.253899 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.255422 kubelet[3435]: W0417 23:35:40.253918 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.255422 kubelet[3435]: E0417 23:35:40.253945 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.255422 kubelet[3435]: E0417 23:35:40.254460 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.255422 kubelet[3435]: W0417 23:35:40.254481 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.255422 kubelet[3435]: E0417 23:35:40.254506 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.255422 kubelet[3435]: E0417 23:35:40.255413 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.255873 kubelet[3435]: W0417 23:35:40.255439 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.255873 kubelet[3435]: E0417 23:35:40.255467 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.258112 kubelet[3435]: E0417 23:35:40.257106 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.258112 kubelet[3435]: W0417 23:35:40.257145 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.258112 kubelet[3435]: E0417 23:35:40.257180 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.258762 kubelet[3435]: E0417 23:35:40.258451 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.258762 kubelet[3435]: W0417 23:35:40.258479 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.258762 kubelet[3435]: E0417 23:35:40.258508 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.260349 kubelet[3435]: E0417 23:35:40.259412 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.260349 kubelet[3435]: W0417 23:35:40.259436 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.260349 kubelet[3435]: E0417 23:35:40.259465 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.260349 kubelet[3435]: E0417 23:35:40.259979 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.260349 kubelet[3435]: W0417 23:35:40.260000 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.260349 kubelet[3435]: E0417 23:35:40.260025 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.260695 kubelet[3435]: E0417 23:35:40.260603 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.260695 kubelet[3435]: W0417 23:35:40.260625 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.260695 kubelet[3435]: E0417 23:35:40.260649 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.262775 kubelet[3435]: E0417 23:35:40.261053 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.262775 kubelet[3435]: W0417 23:35:40.261083 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.262775 kubelet[3435]: E0417 23:35:40.261108 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.262775 kubelet[3435]: E0417 23:35:40.261576 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.262775 kubelet[3435]: W0417 23:35:40.261597 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.262775 kubelet[3435]: E0417 23:35:40.261674 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.262775 kubelet[3435]: E0417 23:35:40.262410 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.262775 kubelet[3435]: W0417 23:35:40.262433 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.262775 kubelet[3435]: E0417 23:35:40.262458 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.265214 kubelet[3435]: E0417 23:35:40.265099 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.265214 kubelet[3435]: W0417 23:35:40.265141 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.265214 kubelet[3435]: E0417 23:35:40.265175 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.267471 kubelet[3435]: E0417 23:35:40.267093 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.267471 kubelet[3435]: W0417 23:35:40.267140 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.267471 kubelet[3435]: E0417 23:35:40.267174 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.268665 kubelet[3435]: E0417 23:35:40.268616 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.268997 kubelet[3435]: W0417 23:35:40.268654 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.268997 kubelet[3435]: E0417 23:35:40.268705 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.269747 kubelet[3435]: E0417 23:35:40.269565 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.269747 kubelet[3435]: W0417 23:35:40.269602 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.269747 kubelet[3435]: E0417 23:35:40.269637 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:40.273261 kubelet[3435]: E0417 23:35:40.273209 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.273261 kubelet[3435]: W0417 23:35:40.273249 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.273261 kubelet[3435]: E0417 23:35:40.273287 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.287116 kubelet[3435]: E0417 23:35:40.287068 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:40.287116 kubelet[3435]: W0417 23:35:40.287105 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:40.287443 kubelet[3435]: E0417 23:35:40.287135 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:40.312937 containerd[2021]: time="2026-04-17T23:35:40.312719804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hfhp8,Uid:e2d5c810-dd18-4f03-b4c2-a1ed5ac323cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4537951aaf67bc51409feb123e13a18d1ce0667727a481151eddd5e6792c0d5\"" Apr 17 23:35:41.700991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1370651822.mount: Deactivated successfully. 
Apr 17 23:35:41.834931 kubelet[3435]: E0417 23:35:41.833448 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d" Apr 17 23:35:42.648524 containerd[2021]: time="2026-04-17T23:35:42.648457667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:42.652436 containerd[2021]: time="2026-04-17T23:35:42.652370471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 17 23:35:42.655117 containerd[2021]: time="2026-04-17T23:35:42.655033859Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:42.659772 containerd[2021]: time="2026-04-17T23:35:42.659692667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:42.662304 containerd[2021]: time="2026-04-17T23:35:42.661214087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.41417062s" Apr 17 23:35:42.662304 containerd[2021]: time="2026-04-17T23:35:42.661274663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 17 23:35:42.663949 containerd[2021]: time="2026-04-17T23:35:42.663591851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:35:42.697358 containerd[2021]: time="2026-04-17T23:35:42.697281252Z" level=info msg="CreateContainer within sandbox \"b4b7a6d076c27beed59eebace2c3e92c8cc292f6627faef867e44bb3d006313e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:35:42.730589 containerd[2021]: time="2026-04-17T23:35:42.730412208Z" level=info msg="CreateContainer within sandbox \"b4b7a6d076c27beed59eebace2c3e92c8cc292f6627faef867e44bb3d006313e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a6d2d000bef889797889b5341713c159246699d90593621bd6d0fe3210fadc3a\"" Apr 17 23:35:42.732682 containerd[2021]: time="2026-04-17T23:35:42.732594420Z" level=info msg="StartContainer for \"a6d2d000bef889797889b5341713c159246699d90593621bd6d0fe3210fadc3a\"" Apr 17 23:35:42.790102 systemd[1]: Started cri-containerd-a6d2d000bef889797889b5341713c159246699d90593621bd6d0fe3210fadc3a.scope - libcontainer container a6d2d000bef889797889b5341713c159246699d90593621bd6d0fe3210fadc3a. 
Apr 17 23:35:42.861475 containerd[2021]: time="2026-04-17T23:35:42.861401136Z" level=info msg="StartContainer for \"a6d2d000bef889797889b5341713c159246699d90593621bd6d0fe3210fadc3a\" returns successfully" Apr 17 23:35:43.128297 kubelet[3435]: E0417 23:35:43.127897 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.128297 kubelet[3435]: W0417 23:35:43.127931 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.128297 kubelet[3435]: E0417 23:35:43.127963 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.131241 kubelet[3435]: E0417 23:35:43.130955 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.131241 kubelet[3435]: W0417 23:35:43.130991 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.131241 kubelet[3435]: E0417 23:35:43.131022 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.131627 kubelet[3435]: E0417 23:35:43.131602 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.131734 kubelet[3435]: W0417 23:35:43.131710 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.131875 kubelet[3435]: E0417 23:35:43.131852 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.132670 kubelet[3435]: E0417 23:35:43.132422 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.132670 kubelet[3435]: W0417 23:35:43.132452 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.132670 kubelet[3435]: E0417 23:35:43.132478 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.133220 kubelet[3435]: E0417 23:35:43.133190 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.133348 kubelet[3435]: W0417 23:35:43.133323 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.133458 kubelet[3435]: E0417 23:35:43.133435 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.135069 kubelet[3435]: E0417 23:35:43.134333 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.135411 kubelet[3435]: W0417 23:35:43.135255 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.135411 kubelet[3435]: E0417 23:35:43.135304 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.136476 kubelet[3435]: E0417 23:35:43.136176 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.136476 kubelet[3435]: W0417 23:35:43.136209 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.136476 kubelet[3435]: E0417 23:35:43.136260 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.138654 kubelet[3435]: E0417 23:35:43.138170 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.138654 kubelet[3435]: W0417 23:35:43.138205 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.138654 kubelet[3435]: E0417 23:35:43.138237 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.140861 kubelet[3435]: E0417 23:35:43.140664 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.140861 kubelet[3435]: W0417 23:35:43.140700 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.140861 kubelet[3435]: E0417 23:35:43.140733 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.143306 kubelet[3435]: E0417 23:35:43.143069 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.143306 kubelet[3435]: W0417 23:35:43.143102 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.143306 kubelet[3435]: E0417 23:35:43.143134 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.146286 kubelet[3435]: E0417 23:35:43.146028 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.146286 kubelet[3435]: W0417 23:35:43.146063 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.146286 kubelet[3435]: E0417 23:35:43.146095 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.147686 kubelet[3435]: E0417 23:35:43.147647 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.148052 kubelet[3435]: W0417 23:35:43.147897 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.148052 kubelet[3435]: E0417 23:35:43.147942 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.149823 kubelet[3435]: E0417 23:35:43.149456 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.150460 kubelet[3435]: W0417 23:35:43.149985 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.150460 kubelet[3435]: E0417 23:35:43.150033 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.151567 kubelet[3435]: E0417 23:35:43.151113 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.151567 kubelet[3435]: W0417 23:35:43.151141 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.151567 kubelet[3435]: E0417 23:35:43.151169 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.153338 kubelet[3435]: E0417 23:35:43.152900 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.153338 kubelet[3435]: W0417 23:35:43.152934 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.153338 kubelet[3435]: E0417 23:35:43.152964 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.183564 kubelet[3435]: E0417 23:35:43.183375 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.183564 kubelet[3435]: W0417 23:35:43.183413 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.183564 kubelet[3435]: E0417 23:35:43.183445 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.185533 kubelet[3435]: E0417 23:35:43.185234 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.185533 kubelet[3435]: W0417 23:35:43.185269 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.185533 kubelet[3435]: E0417 23:35:43.185300 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.187498 kubelet[3435]: E0417 23:35:43.187280 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.187498 kubelet[3435]: W0417 23:35:43.187315 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.187498 kubelet[3435]: E0417 23:35:43.187347 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.188438 kubelet[3435]: E0417 23:35:43.188239 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.188438 kubelet[3435]: W0417 23:35:43.188286 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.188438 kubelet[3435]: E0417 23:35:43.188317 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.191878 kubelet[3435]: E0417 23:35:43.190986 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.191878 kubelet[3435]: W0417 23:35:43.191030 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.191878 kubelet[3435]: E0417 23:35:43.191062 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.192721 kubelet[3435]: E0417 23:35:43.192509 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.192721 kubelet[3435]: W0417 23:35:43.192542 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.192721 kubelet[3435]: E0417 23:35:43.192575 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.194190 kubelet[3435]: E0417 23:35:43.194019 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.194190 kubelet[3435]: W0417 23:35:43.194049 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.194190 kubelet[3435]: E0417 23:35:43.194080 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.195333 kubelet[3435]: E0417 23:35:43.195065 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.195333 kubelet[3435]: W0417 23:35:43.195097 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.195333 kubelet[3435]: E0417 23:35:43.195127 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.197777 kubelet[3435]: E0417 23:35:43.197502 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.197777 kubelet[3435]: W0417 23:35:43.197537 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.197777 kubelet[3435]: E0417 23:35:43.197573 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.198748 kubelet[3435]: E0417 23:35:43.198466 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.198748 kubelet[3435]: W0417 23:35:43.198499 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.198748 kubelet[3435]: E0417 23:35:43.198529 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.199657 kubelet[3435]: E0417 23:35:43.199381 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.199657 kubelet[3435]: W0417 23:35:43.199412 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.199657 kubelet[3435]: E0417 23:35:43.199442 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.201015 kubelet[3435]: E0417 23:35:43.200586 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.201015 kubelet[3435]: W0417 23:35:43.200613 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.201015 kubelet[3435]: E0417 23:35:43.200644 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.202079 kubelet[3435]: E0417 23:35:43.201682 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.202079 kubelet[3435]: W0417 23:35:43.201716 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.202079 kubelet[3435]: E0417 23:35:43.201747 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.204834 kubelet[3435]: E0417 23:35:43.204564 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.204834 kubelet[3435]: W0417 23:35:43.204599 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.204834 kubelet[3435]: E0417 23:35:43.204632 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.205387 kubelet[3435]: E0417 23:35:43.205360 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.205924 kubelet[3435]: W0417 23:35:43.205614 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.205924 kubelet[3435]: E0417 23:35:43.205654 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.207098 kubelet[3435]: E0417 23:35:43.206883 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.207319 kubelet[3435]: W0417 23:35:43.207271 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.207564 kubelet[3435]: E0417 23:35:43.207536 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.210560 kubelet[3435]: E0417 23:35:43.210522 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.210882 kubelet[3435]: W0417 23:35:43.210851 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.211009 kubelet[3435]: E0417 23:35:43.210985 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:43.211739 kubelet[3435]: E0417 23:35:43.211707 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:43.211975 kubelet[3435]: W0417 23:35:43.211885 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:43.211975 kubelet[3435]: E0417 23:35:43.211921 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:43.828591 kubelet[3435]: E0417 23:35:43.828243 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d" Apr 17 23:35:44.146180 kubelet[3435]: I0417 23:35:44.145287 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8865459b-pb4tn" podStartSLOduration=2.725782331 podStartE2EDuration="5.145263815s" podCreationTimestamp="2026-04-17 23:35:39 +0000 UTC" firstStartedPulling="2026-04-17 23:35:40.243611071 +0000 UTC m=+36.810150508" lastFinishedPulling="2026-04-17 23:35:42.663092543 +0000 UTC m=+39.229631992" observedRunningTime="2026-04-17 23:35:43.153857434 +0000 UTC m=+39.720397027" watchObservedRunningTime="2026-04-17 23:35:44.145263815 +0000 UTC m=+40.711803252" Apr 17 23:35:44.162818 kubelet[3435]: E0417 23:35:44.162040 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.162818 kubelet[3435]: W0417 23:35:44.162284 3435 driver-call.go:149] FlexVolume: driver 
call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.162818 kubelet[3435]: E0417 23:35:44.162326 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.165536 kubelet[3435]: E0417 23:35:44.165441 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.165536 kubelet[3435]: W0417 23:35:44.165501 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.165536 kubelet[3435]: E0417 23:35:44.165537 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.167009 kubelet[3435]: E0417 23:35:44.166959 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.167009 kubelet[3435]: W0417 23:35:44.166998 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.167205 kubelet[3435]: E0417 23:35:44.167032 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.170178 kubelet[3435]: E0417 23:35:44.169829 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.170178 kubelet[3435]: W0417 23:35:44.169870 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.170178 kubelet[3435]: E0417 23:35:44.170006 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.171432 kubelet[3435]: E0417 23:35:44.171070 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.171432 kubelet[3435]: W0417 23:35:44.171099 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.171432 kubelet[3435]: E0417 23:35:44.171247 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.172247 kubelet[3435]: E0417 23:35:44.172052 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.172247 kubelet[3435]: W0417 23:35:44.172088 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.172247 kubelet[3435]: E0417 23:35:44.172174 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.174240 kubelet[3435]: E0417 23:35:44.173048 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.174240 kubelet[3435]: W0417 23:35:44.173077 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.174240 kubelet[3435]: E0417 23:35:44.173109 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.174240 kubelet[3435]: E0417 23:35:44.173861 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.174240 kubelet[3435]: W0417 23:35:44.173885 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.174240 kubelet[3435]: E0417 23:35:44.173912 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.175777 kubelet[3435]: E0417 23:35:44.175086 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.175777 kubelet[3435]: W0417 23:35:44.175125 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.175777 kubelet[3435]: E0417 23:35:44.175156 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.175777 kubelet[3435]: E0417 23:35:44.175715 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.175777 kubelet[3435]: W0417 23:35:44.175735 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.175777 kubelet[3435]: E0417 23:35:44.175756 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.177768 kubelet[3435]: E0417 23:35:44.176376 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.177768 kubelet[3435]: W0417 23:35:44.176411 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.177768 kubelet[3435]: E0417 23:35:44.176532 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.177768 kubelet[3435]: E0417 23:35:44.177481 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.177768 kubelet[3435]: W0417 23:35:44.177505 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.177768 kubelet[3435]: E0417 23:35:44.177535 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.179143 kubelet[3435]: E0417 23:35:44.178263 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.179143 kubelet[3435]: W0417 23:35:44.178301 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.179143 kubelet[3435]: E0417 23:35:44.178330 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.179143 kubelet[3435]: E0417 23:35:44.179095 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.179143 kubelet[3435]: W0417 23:35:44.179120 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.179143 kubelet[3435]: E0417 23:35:44.179147 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.180057 kubelet[3435]: E0417 23:35:44.179535 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.180057 kubelet[3435]: W0417 23:35:44.179568 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.180057 kubelet[3435]: E0417 23:35:44.179593 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.195271 kubelet[3435]: E0417 23:35:44.195201 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.195271 kubelet[3435]: W0417 23:35:44.195240 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.195471 kubelet[3435]: E0417 23:35:44.195299 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.197182 kubelet[3435]: E0417 23:35:44.196757 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.197182 kubelet[3435]: W0417 23:35:44.196950 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.197182 kubelet[3435]: E0417 23:35:44.196985 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.199360 kubelet[3435]: E0417 23:35:44.197977 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.199360 kubelet[3435]: W0417 23:35:44.198129 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.199360 kubelet[3435]: E0417 23:35:44.198161 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.200078 kubelet[3435]: E0417 23:35:44.199965 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.200078 kubelet[3435]: W0417 23:35:44.200066 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.201864 kubelet[3435]: E0417 23:35:44.200506 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.201864 kubelet[3435]: E0417 23:35:44.201419 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.201864 kubelet[3435]: W0417 23:35:44.201445 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.201864 kubelet[3435]: E0417 23:35:44.201606 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.206496 kubelet[3435]: E0417 23:35:44.202712 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.206496 kubelet[3435]: W0417 23:35:44.202752 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.206496 kubelet[3435]: E0417 23:35:44.202905 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.206496 kubelet[3435]: E0417 23:35:44.204376 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.206496 kubelet[3435]: W0417 23:35:44.204400 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.206856 kubelet[3435]: E0417 23:35:44.206549 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.208096 kubelet[3435]: E0417 23:35:44.207848 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.208096 kubelet[3435]: W0417 23:35:44.207908 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.208096 kubelet[3435]: E0417 23:35:44.207939 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.209286 kubelet[3435]: E0417 23:35:44.209016 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.209286 kubelet[3435]: W0417 23:35:44.209272 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.209471 kubelet[3435]: E0417 23:35:44.209305 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.210462 kubelet[3435]: E0417 23:35:44.210410 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.210462 kubelet[3435]: W0417 23:35:44.210449 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.210640 kubelet[3435]: E0417 23:35:44.210527 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.213378 kubelet[3435]: E0417 23:35:44.213308 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.213378 kubelet[3435]: W0417 23:35:44.213366 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.213811 kubelet[3435]: E0417 23:35:44.213400 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.215433 kubelet[3435]: E0417 23:35:44.215189 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.217510 kubelet[3435]: W0417 23:35:44.217278 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.217510 kubelet[3435]: E0417 23:35:44.217336 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.222269 kubelet[3435]: E0417 23:35:44.222000 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.222269 kubelet[3435]: W0417 23:35:44.222047 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.222269 kubelet[3435]: E0417 23:35:44.222079 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.223409 kubelet[3435]: E0417 23:35:44.223151 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.228543 kubelet[3435]: W0417 23:35:44.228484 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.228880 kubelet[3435]: E0417 23:35:44.228839 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.230162 kubelet[3435]: E0417 23:35:44.230110 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.230162 kubelet[3435]: W0417 23:35:44.230151 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.230395 kubelet[3435]: E0417 23:35:44.230333 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.231589 kubelet[3435]: E0417 23:35:44.231534 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.231589 kubelet[3435]: W0417 23:35:44.231575 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.232315 kubelet[3435]: E0417 23:35:44.232267 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.237226 kubelet[3435]: E0417 23:35:44.237164 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.237226 kubelet[3435]: W0417 23:35:44.237224 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.237436 kubelet[3435]: E0417 23:35:44.237262 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:35:44.239392 kubelet[3435]: E0417 23:35:44.239329 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:35:44.239392 kubelet[3435]: W0417 23:35:44.239376 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:35:44.239564 kubelet[3435]: E0417 23:35:44.239423 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:35:44.276966 containerd[2021]: time="2026-04-17T23:35:44.276902843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:44.279374 containerd[2021]: time="2026-04-17T23:35:44.279317027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 17 23:35:44.281490 containerd[2021]: time="2026-04-17T23:35:44.281443739Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:44.288080 containerd[2021]: time="2026-04-17T23:35:44.288014027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:44.289557 containerd[2021]: time="2026-04-17T23:35:44.289492979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.625843396s" Apr 17 23:35:44.289651 containerd[2021]: time="2026-04-17T23:35:44.289553759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 17 23:35:44.297737 containerd[2021]: time="2026-04-17T23:35:44.297665663Z" level=info msg="CreateContainer within sandbox \"e4537951aaf67bc51409feb123e13a18d1ce0667727a481151eddd5e6792c0d5\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:35:44.322935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277337074.mount: Deactivated successfully. Apr 17 23:35:44.331643 containerd[2021]: time="2026-04-17T23:35:44.331587168Z" level=info msg="CreateContainer within sandbox \"e4537951aaf67bc51409feb123e13a18d1ce0667727a481151eddd5e6792c0d5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"81d3dc413d4dd39574b8c9ba1ae242121bdb64ea70a4f0b4d542ecb1cecf3331\"" Apr 17 23:35:44.333041 containerd[2021]: time="2026-04-17T23:35:44.332991156Z" level=info msg="StartContainer for \"81d3dc413d4dd39574b8c9ba1ae242121bdb64ea70a4f0b4d542ecb1cecf3331\"" Apr 17 23:35:44.392135 systemd[1]: Started cri-containerd-81d3dc413d4dd39574b8c9ba1ae242121bdb64ea70a4f0b4d542ecb1cecf3331.scope - libcontainer container 81d3dc413d4dd39574b8c9ba1ae242121bdb64ea70a4f0b4d542ecb1cecf3331. Apr 17 23:35:44.445064 containerd[2021]: time="2026-04-17T23:35:44.444878220Z" level=info msg="StartContainer for \"81d3dc413d4dd39574b8c9ba1ae242121bdb64ea70a4f0b4d542ecb1cecf3331\" returns successfully" Apr 17 23:35:44.478825 systemd[1]: cri-containerd-81d3dc413d4dd39574b8c9ba1ae242121bdb64ea70a4f0b4d542ecb1cecf3331.scope: Deactivated successfully. Apr 17 23:35:44.674168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81d3dc413d4dd39574b8c9ba1ae242121bdb64ea70a4f0b4d542ecb1cecf3331-rootfs.mount: Deactivated successfully. 
Apr 17 23:35:44.677442 containerd[2021]: time="2026-04-17T23:35:44.677065177Z" level=info msg="shim disconnected" id=81d3dc413d4dd39574b8c9ba1ae242121bdb64ea70a4f0b4d542ecb1cecf3331 namespace=k8s.io Apr 17 23:35:44.677442 containerd[2021]: time="2026-04-17T23:35:44.677169229Z" level=warning msg="cleaning up after shim disconnected" id=81d3dc413d4dd39574b8c9ba1ae242121bdb64ea70a4f0b4d542ecb1cecf3331 namespace=k8s.io Apr 17 23:35:44.677442 containerd[2021]: time="2026-04-17T23:35:44.677194345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:35:45.120982 containerd[2021]: time="2026-04-17T23:35:45.120824268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:35:45.825381 kubelet[3435]: E0417 23:35:45.825312 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d" Apr 17 23:35:47.827385 kubelet[3435]: E0417 23:35:47.827091 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d" Apr 17 23:35:49.826118 kubelet[3435]: E0417 23:35:49.826046 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d" Apr 17 23:35:51.824948 kubelet[3435]: E0417 23:35:51.824877 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d" Apr 17 23:35:53.029310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006232583.mount: Deactivated successfully. Apr 17 23:35:53.085306 containerd[2021]: time="2026-04-17T23:35:53.084070015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:53.086036 containerd[2021]: time="2026-04-17T23:35:53.085658131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 17 23:35:53.087077 containerd[2021]: time="2026-04-17T23:35:53.086977747Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:53.090988 containerd[2021]: time="2026-04-17T23:35:53.090884167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:53.093855 containerd[2021]: time="2026-04-17T23:35:53.092710999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 7.971812355s" Apr 17 23:35:53.093855 containerd[2021]: time="2026-04-17T23:35:53.092814607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 17 
23:35:53.101529 containerd[2021]: time="2026-04-17T23:35:53.101318527Z" level=info msg="CreateContainer within sandbox \"e4537951aaf67bc51409feb123e13a18d1ce0667727a481151eddd5e6792c0d5\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:35:53.122293 containerd[2021]: time="2026-04-17T23:35:53.122196199Z" level=info msg="CreateContainer within sandbox \"e4537951aaf67bc51409feb123e13a18d1ce0667727a481151eddd5e6792c0d5\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d19f705fbe2293d7a03c0d39d2c28bf5e6bf28600a641de8cb62289a8dcb71d1\"" Apr 17 23:35:53.124341 containerd[2021]: time="2026-04-17T23:35:53.123273127Z" level=info msg="StartContainer for \"d19f705fbe2293d7a03c0d39d2c28bf5e6bf28600a641de8cb62289a8dcb71d1\"" Apr 17 23:35:53.193691 systemd[1]: run-containerd-runc-k8s.io-d19f705fbe2293d7a03c0d39d2c28bf5e6bf28600a641de8cb62289a8dcb71d1-runc.pdYhrz.mount: Deactivated successfully. Apr 17 23:35:53.210176 systemd[1]: Started cri-containerd-d19f705fbe2293d7a03c0d39d2c28bf5e6bf28600a641de8cb62289a8dcb71d1.scope - libcontainer container d19f705fbe2293d7a03c0d39d2c28bf5e6bf28600a641de8cb62289a8dcb71d1. Apr 17 23:35:53.262699 containerd[2021]: time="2026-04-17T23:35:53.262636280Z" level=info msg="StartContainer for \"d19f705fbe2293d7a03c0d39d2c28bf5e6bf28600a641de8cb62289a8dcb71d1\" returns successfully" Apr 17 23:35:53.461061 systemd[1]: cri-containerd-d19f705fbe2293d7a03c0d39d2c28bf5e6bf28600a641de8cb62289a8dcb71d1.scope: Deactivated successfully. 
Apr 17 23:35:53.827247 kubelet[3435]: E0417 23:35:53.826692 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d" Apr 17 23:35:54.024718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d19f705fbe2293d7a03c0d39d2c28bf5e6bf28600a641de8cb62289a8dcb71d1-rootfs.mount: Deactivated successfully. Apr 17 23:35:54.032908 containerd[2021]: time="2026-04-17T23:35:54.032811116Z" level=info msg="shim disconnected" id=d19f705fbe2293d7a03c0d39d2c28bf5e6bf28600a641de8cb62289a8dcb71d1 namespace=k8s.io Apr 17 23:35:54.032908 containerd[2021]: time="2026-04-17T23:35:54.032900216Z" level=warning msg="cleaning up after shim disconnected" id=d19f705fbe2293d7a03c0d39d2c28bf5e6bf28600a641de8cb62289a8dcb71d1 namespace=k8s.io Apr 17 23:35:54.033139 containerd[2021]: time="2026-04-17T23:35:54.032923448Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:35:54.157743 containerd[2021]: time="2026-04-17T23:35:54.157618052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:35:55.828036 kubelet[3435]: E0417 23:35:55.826257 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d" Apr 17 23:35:57.653602 containerd[2021]: time="2026-04-17T23:35:57.653479994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:57.655287 containerd[2021]: time="2026-04-17T23:35:57.655207754Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 17 23:35:57.656839 containerd[2021]: time="2026-04-17T23:35:57.656273438Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:57.660879 containerd[2021]: time="2026-04-17T23:35:57.660751910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:35:57.662917 containerd[2021]: time="2026-04-17T23:35:57.662721854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.505018218s" Apr 17 23:35:57.663157 containerd[2021]: time="2026-04-17T23:35:57.663121286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 17 23:35:57.672499 containerd[2021]: time="2026-04-17T23:35:57.672439994Z" level=info msg="CreateContainer within sandbox \"e4537951aaf67bc51409feb123e13a18d1ce0667727a481151eddd5e6792c0d5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:35:57.699069 containerd[2021]: time="2026-04-17T23:35:57.698991770Z" level=info msg="CreateContainer within sandbox \"e4537951aaf67bc51409feb123e13a18d1ce0667727a481151eddd5e6792c0d5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2e60536c0b047a8211be20327f81802ee3fa229eebf3a88a1968362a3c7620cf\"" Apr 17 23:35:57.701075 containerd[2021]: time="2026-04-17T23:35:57.701018558Z" level=info msg="StartContainer 
for \"2e60536c0b047a8211be20327f81802ee3fa229eebf3a88a1968362a3c7620cf\"" Apr 17 23:35:57.769133 systemd[1]: Started cri-containerd-2e60536c0b047a8211be20327f81802ee3fa229eebf3a88a1968362a3c7620cf.scope - libcontainer container 2e60536c0b047a8211be20327f81802ee3fa229eebf3a88a1968362a3c7620cf. Apr 17 23:35:57.824974 kubelet[3435]: E0417 23:35:57.824882 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d" Apr 17 23:35:57.828564 containerd[2021]: time="2026-04-17T23:35:57.828401019Z" level=info msg="StartContainer for \"2e60536c0b047a8211be20327f81802ee3fa229eebf3a88a1968362a3c7620cf\" returns successfully" Apr 17 23:35:59.620550 containerd[2021]: time="2026-04-17T23:35:59.620445808Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:35:59.630080 kubelet[3435]: I0417 23:35:59.627659 3435 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 23:35:59.629408 systemd[1]: cri-containerd-2e60536c0b047a8211be20327f81802ee3fa229eebf3a88a1968362a3c7620cf.scope: Deactivated successfully. Apr 17 23:35:59.630020 systemd[1]: cri-containerd-2e60536c0b047a8211be20327f81802ee3fa229eebf3a88a1968362a3c7620cf.scope: Consumed 1.017s CPU time. Apr 17 23:35:59.703206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e60536c0b047a8211be20327f81802ee3fa229eebf3a88a1968362a3c7620cf-rootfs.mount: Deactivated successfully. 
Apr 17 23:35:59.705490 containerd[2021]: time="2026-04-17T23:35:59.705381784Z" level=info msg="shim disconnected" id=2e60536c0b047a8211be20327f81802ee3fa229eebf3a88a1968362a3c7620cf namespace=k8s.io Apr 17 23:35:59.705490 containerd[2021]: time="2026-04-17T23:35:59.705483100Z" level=warning msg="cleaning up after shim disconnected" id=2e60536c0b047a8211be20327f81802ee3fa229eebf3a88a1968362a3c7620cf namespace=k8s.io Apr 17 23:35:59.705910 containerd[2021]: time="2026-04-17T23:35:59.705505036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:35:59.797734 systemd[1]: Created slice kubepods-burstable-podc31bfe7e_863f_41d7_b162_ae069a76ee07.slice - libcontainer container kubepods-burstable-podc31bfe7e_863f_41d7_b162_ae069a76ee07.slice. Apr 17 23:35:59.824748 systemd[1]: Created slice kubepods-besteffort-podcb7e7a2a_af32_4537_b52a_928d1a505f9d.slice - libcontainer container kubepods-besteffort-podcb7e7a2a_af32_4537_b52a_928d1a505f9d.slice. Apr 17 23:35:59.829822 kubelet[3435]: I0417 23:35:59.827052 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpjs4\" (UniqueName: \"kubernetes.io/projected/6421d27f-ddc6-4d22-86e3-4278c749f598-kube-api-access-bpjs4\") pod \"coredns-674b8bbfcf-mh6rz\" (UID: \"6421d27f-ddc6-4d22-86e3-4278c749f598\") " pod="kube-system/coredns-674b8bbfcf-mh6rz" Apr 17 23:35:59.829822 kubelet[3435]: I0417 23:35:59.827127 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20e438e6-a5f5-45d7-b808-1a4fb95924d1-calico-apiserver-certs\") pod \"calico-apiserver-574894c46f-bcpkg\" (UID: \"20e438e6-a5f5-45d7-b808-1a4fb95924d1\") " pod="calico-system/calico-apiserver-574894c46f-bcpkg" Apr 17 23:35:59.829822 kubelet[3435]: I0417 23:35:59.827175 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8rr\" 
(UniqueName: \"kubernetes.io/projected/20e438e6-a5f5-45d7-b808-1a4fb95924d1-kube-api-access-fr8rr\") pod \"calico-apiserver-574894c46f-bcpkg\" (UID: \"20e438e6-a5f5-45d7-b808-1a4fb95924d1\") " pod="calico-system/calico-apiserver-574894c46f-bcpkg" Apr 17 23:35:59.829822 kubelet[3435]: I0417 23:35:59.827219 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtqxg\" (UniqueName: \"kubernetes.io/projected/cb7e7a2a-af32-4537-b52a-928d1a505f9d-kube-api-access-xtqxg\") pod \"whisker-7647c56ff9-8x867\" (UID: \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\") " pod="calico-system/whisker-7647c56ff9-8x867" Apr 17 23:35:59.829822 kubelet[3435]: I0417 23:35:59.827257 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6421d27f-ddc6-4d22-86e3-4278c749f598-config-volume\") pod \"coredns-674b8bbfcf-mh6rz\" (UID: \"6421d27f-ddc6-4d22-86e3-4278c749f598\") " pod="kube-system/coredns-674b8bbfcf-mh6rz" Apr 17 23:35:59.830237 kubelet[3435]: I0417 23:35:59.827296 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb7e7a2a-af32-4537-b52a-928d1a505f9d-whisker-backend-key-pair\") pod \"whisker-7647c56ff9-8x867\" (UID: \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\") " pod="calico-system/whisker-7647c56ff9-8x867" Apr 17 23:35:59.830237 kubelet[3435]: I0417 23:35:59.827331 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7e7a2a-af32-4537-b52a-928d1a505f9d-whisker-ca-bundle\") pod \"whisker-7647c56ff9-8x867\" (UID: \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\") " pod="calico-system/whisker-7647c56ff9-8x867" Apr 17 23:35:59.830237 kubelet[3435]: I0417 23:35:59.827370 3435 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c31bfe7e-863f-41d7-b162-ae069a76ee07-config-volume\") pod \"coredns-674b8bbfcf-lghm6\" (UID: \"c31bfe7e-863f-41d7-b162-ae069a76ee07\") " pod="kube-system/coredns-674b8bbfcf-lghm6" Apr 17 23:35:59.830237 kubelet[3435]: I0417 23:35:59.827408 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cb7e7a2a-af32-4537-b52a-928d1a505f9d-nginx-config\") pod \"whisker-7647c56ff9-8x867\" (UID: \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\") " pod="calico-system/whisker-7647c56ff9-8x867" Apr 17 23:35:59.830237 kubelet[3435]: I0417 23:35:59.827454 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hqsq\" (UniqueName: \"kubernetes.io/projected/c31bfe7e-863f-41d7-b162-ae069a76ee07-kube-api-access-9hqsq\") pod \"coredns-674b8bbfcf-lghm6\" (UID: \"c31bfe7e-863f-41d7-b162-ae069a76ee07\") " pod="kube-system/coredns-674b8bbfcf-lghm6" Apr 17 23:35:59.830509 kubelet[3435]: I0417 23:35:59.827489 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/406e705d-732b-4c0e-bff1-e277744b9161-tigera-ca-bundle\") pod \"calico-kube-controllers-76fbf6d5cb-ghhvr\" (UID: \"406e705d-732b-4c0e-bff1-e277744b9161\") " pod="calico-system/calico-kube-controllers-76fbf6d5cb-ghhvr" Apr 17 23:35:59.830509 kubelet[3435]: I0417 23:35:59.827526 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qvx4\" (UniqueName: \"kubernetes.io/projected/406e705d-732b-4c0e-bff1-e277744b9161-kube-api-access-7qvx4\") pod \"calico-kube-controllers-76fbf6d5cb-ghhvr\" (UID: \"406e705d-732b-4c0e-bff1-e277744b9161\") " pod="calico-system/calico-kube-controllers-76fbf6d5cb-ghhvr" 
Apr 17 23:35:59.851194 systemd[1]: Created slice kubepods-besteffort-pod406e705d_732b_4c0e_bff1_e277744b9161.slice - libcontainer container kubepods-besteffort-pod406e705d_732b_4c0e_bff1_e277744b9161.slice. Apr 17 23:35:59.873176 systemd[1]: Created slice kubepods-besteffort-pod20e438e6_a5f5_45d7_b808_1a4fb95924d1.slice - libcontainer container kubepods-besteffort-pod20e438e6_a5f5_45d7_b808_1a4fb95924d1.slice. Apr 17 23:35:59.892596 systemd[1]: Created slice kubepods-burstable-pod6421d27f_ddc6_4d22_86e3_4278c749f598.slice - libcontainer container kubepods-burstable-pod6421d27f_ddc6_4d22_86e3_4278c749f598.slice. Apr 17 23:35:59.916853 systemd[1]: Created slice kubepods-besteffort-pod28f397bf_652b_49aa_8829_d5327f553244.slice - libcontainer container kubepods-besteffort-pod28f397bf_652b_49aa_8829_d5327f553244.slice. Apr 17 23:35:59.930088 kubelet[3435]: I0417 23:35:59.929937 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cb8fd53d-17b0-4864-975d-aee521739f5b-calico-apiserver-certs\") pod \"calico-apiserver-574894c46f-77jgd\" (UID: \"cb8fd53d-17b0-4864-975d-aee521739f5b\") " pod="calico-system/calico-apiserver-574894c46f-77jgd" Apr 17 23:35:59.930088 kubelet[3435]: I0417 23:35:59.930069 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwg2s\" (UniqueName: \"kubernetes.io/projected/cb8fd53d-17b0-4864-975d-aee521739f5b-kube-api-access-zwg2s\") pod \"calico-apiserver-574894c46f-77jgd\" (UID: \"cb8fd53d-17b0-4864-975d-aee521739f5b\") " pod="calico-system/calico-apiserver-574894c46f-77jgd" Apr 17 23:35:59.930358 kubelet[3435]: I0417 23:35:59.930322 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f397bf-652b-49aa-8829-d5327f553244-config\") pod \"goldmane-5b85766d88-gf6r8\" (UID: 
\"28f397bf-652b-49aa-8829-d5327f553244\") " pod="calico-system/goldmane-5b85766d88-gf6r8" Apr 17 23:35:59.930430 kubelet[3435]: I0417 23:35:59.930386 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thh5w\" (UniqueName: \"kubernetes.io/projected/28f397bf-652b-49aa-8829-d5327f553244-kube-api-access-thh5w\") pod \"goldmane-5b85766d88-gf6r8\" (UID: \"28f397bf-652b-49aa-8829-d5327f553244\") " pod="calico-system/goldmane-5b85766d88-gf6r8" Apr 17 23:35:59.930737 kubelet[3435]: I0417 23:35:59.930523 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28f397bf-652b-49aa-8829-d5327f553244-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-gf6r8\" (UID: \"28f397bf-652b-49aa-8829-d5327f553244\") " pod="calico-system/goldmane-5b85766d88-gf6r8" Apr 17 23:35:59.930737 kubelet[3435]: I0417 23:35:59.930575 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/28f397bf-652b-49aa-8829-d5327f553244-goldmane-key-pair\") pod \"goldmane-5b85766d88-gf6r8\" (UID: \"28f397bf-652b-49aa-8829-d5327f553244\") " pod="calico-system/goldmane-5b85766d88-gf6r8" Apr 17 23:35:59.940156 systemd[1]: Created slice kubepods-besteffort-podcb8fd53d_17b0_4864_975d_aee521739f5b.slice - libcontainer container kubepods-besteffort-podcb8fd53d_17b0_4864_975d_aee521739f5b.slice. Apr 17 23:36:00.000104 systemd[1]: Created slice kubepods-besteffort-podba48e602_86c6_47be_a12c_378408003d1d.slice - libcontainer container kubepods-besteffort-podba48e602_86c6_47be_a12c_378408003d1d.slice. 
Apr 17 23:36:00.027105 containerd[2021]: time="2026-04-17T23:36:00.024470570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gq6hp,Uid:ba48e602-86c6-47be-a12c-378408003d1d,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:00.118363 containerd[2021]: time="2026-04-17T23:36:00.118287218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lghm6,Uid:c31bfe7e-863f-41d7-b162-ae069a76ee07,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:00.142183 containerd[2021]: time="2026-04-17T23:36:00.142008314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7647c56ff9-8x867,Uid:cb7e7a2a-af32-4537-b52a-928d1a505f9d,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:00.172020 containerd[2021]: time="2026-04-17T23:36:00.171468866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fbf6d5cb-ghhvr,Uid:406e705d-732b-4c0e-bff1-e277744b9161,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:00.187437 containerd[2021]: time="2026-04-17T23:36:00.187365722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574894c46f-bcpkg,Uid:20e438e6-a5f5-45d7-b808-1a4fb95924d1,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:00.206900 containerd[2021]: time="2026-04-17T23:36:00.205865510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mh6rz,Uid:6421d27f-ddc6-4d22-86e3-4278c749f598,Namespace:kube-system,Attempt:0,}" Apr 17 23:36:00.228297 containerd[2021]: time="2026-04-17T23:36:00.227067435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gf6r8,Uid:28f397bf-652b-49aa-8829-d5327f553244,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:00.240819 containerd[2021]: time="2026-04-17T23:36:00.239205999Z" level=info msg="CreateContainer within sandbox \"e4537951aaf67bc51409feb123e13a18d1ce0667727a481151eddd5e6792c0d5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:36:00.291581 
containerd[2021]: time="2026-04-17T23:36:00.291121623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574894c46f-77jgd,Uid:cb8fd53d-17b0-4864-975d-aee521739f5b,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:00.569596 containerd[2021]: time="2026-04-17T23:36:00.569439520Z" level=info msg="CreateContainer within sandbox \"e4537951aaf67bc51409feb123e13a18d1ce0667727a481151eddd5e6792c0d5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"32fbb0f729a926f3f8782efbf98edb3456b05347aabbe5bee10e3d4eb21614e7\"" Apr 17 23:36:00.572814 containerd[2021]: time="2026-04-17T23:36:00.572622244Z" level=info msg="StartContainer for \"32fbb0f729a926f3f8782efbf98edb3456b05347aabbe5bee10e3d4eb21614e7\"" Apr 17 23:36:00.840162 systemd[1]: Started cri-containerd-32fbb0f729a926f3f8782efbf98edb3456b05347aabbe5bee10e3d4eb21614e7.scope - libcontainer container 32fbb0f729a926f3f8782efbf98edb3456b05347aabbe5bee10e3d4eb21614e7. Apr 17 23:36:00.991821 containerd[2021]: time="2026-04-17T23:36:00.989012514Z" level=error msg="Failed to destroy network for sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:00.993828 containerd[2021]: time="2026-04-17T23:36:00.992570658Z" level=error msg="Failed to destroy network for sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:00.997205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6-shm.mount: Deactivated successfully. 
Apr 17 23:36:01.008080 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad-shm.mount: Deactivated successfully. Apr 17 23:36:01.012556 containerd[2021]: time="2026-04-17T23:36:01.012430454Z" level=error msg="encountered an error cleaning up failed sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.012915 containerd[2021]: time="2026-04-17T23:36:01.012548474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lghm6,Uid:c31bfe7e-863f-41d7-b162-ae069a76ee07,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.013544 kubelet[3435]: E0417 23:36:01.013460 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.016909 kubelet[3435]: E0417 23:36:01.013568 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lghm6" Apr 17 23:36:01.016909 kubelet[3435]: E0417 23:36:01.013607 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lghm6" Apr 17 23:36:01.016909 kubelet[3435]: E0417 23:36:01.013710 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lghm6_kube-system(c31bfe7e-863f-41d7-b162-ae069a76ee07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lghm6_kube-system(c31bfe7e-863f-41d7-b162-ae069a76ee07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lghm6" podUID="c31bfe7e-863f-41d7-b162-ae069a76ee07" Apr 17 23:36:01.017252 containerd[2021]: time="2026-04-17T23:36:01.016716807Z" level=error msg="encountered an error cleaning up failed sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.019223 containerd[2021]: time="2026-04-17T23:36:01.018459987Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-gq6hp,Uid:ba48e602-86c6-47be-a12c-378408003d1d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.020221 kubelet[3435]: E0417 23:36:01.019640 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.020221 kubelet[3435]: E0417 23:36:01.019725 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gq6hp" Apr 17 23:36:01.020221 kubelet[3435]: E0417 23:36:01.019964 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gq6hp" Apr 17 23:36:01.020469 kubelet[3435]: E0417 23:36:01.020086 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-gq6hp_calico-system(ba48e602-86c6-47be-a12c-378408003d1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gq6hp_calico-system(ba48e602-86c6-47be-a12c-378408003d1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d" Apr 17 23:36:01.045373 containerd[2021]: time="2026-04-17T23:36:01.044538627Z" level=error msg="Failed to destroy network for sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.046832 containerd[2021]: time="2026-04-17T23:36:01.046551411Z" level=error msg="encountered an error cleaning up failed sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.046832 containerd[2021]: time="2026-04-17T23:36:01.046652715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7647c56ff9-8x867,Uid:cb7e7a2a-af32-4537-b52a-928d1a505f9d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Apr 17 23:36:01.048277 kubelet[3435]: E0417 23:36:01.047105 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.048277 kubelet[3435]: E0417 23:36:01.047198 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7647c56ff9-8x867" Apr 17 23:36:01.048277 kubelet[3435]: E0417 23:36:01.047234 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7647c56ff9-8x867" Apr 17 23:36:01.048583 kubelet[3435]: E0417 23:36:01.047317 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7647c56ff9-8x867_calico-system(cb7e7a2a-af32-4537-b52a-928d1a505f9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7647c56ff9-8x867_calico-system(cb7e7a2a-af32-4537-b52a-928d1a505f9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7647c56ff9-8x867" podUID="cb7e7a2a-af32-4537-b52a-928d1a505f9d" Apr 17 23:36:01.061606 containerd[2021]: time="2026-04-17T23:36:01.061034727Z" level=error msg="Failed to destroy network for sandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.069026 containerd[2021]: time="2026-04-17T23:36:01.068934363Z" level=error msg="encountered an error cleaning up failed sandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.069180 containerd[2021]: time="2026-04-17T23:36:01.069046575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574894c46f-bcpkg,Uid:20e438e6-a5f5-45d7-b808-1a4fb95924d1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.069599 kubelet[3435]: E0417 23:36:01.069525 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.070918 kubelet[3435]: E0417 23:36:01.070776 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-574894c46f-bcpkg" Apr 17 23:36:01.073749 kubelet[3435]: E0417 23:36:01.070944 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-574894c46f-bcpkg" Apr 17 23:36:01.073749 kubelet[3435]: E0417 23:36:01.071102 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-574894c46f-bcpkg_calico-system(20e438e6-a5f5-45d7-b808-1a4fb95924d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-574894c46f-bcpkg_calico-system(20e438e6-a5f5-45d7-b808-1a4fb95924d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-574894c46f-bcpkg" podUID="20e438e6-a5f5-45d7-b808-1a4fb95924d1" Apr 17 23:36:01.091009 containerd[2021]: time="2026-04-17T23:36:01.090945603Z" level=error msg="Failed to destroy network for sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.091980 containerd[2021]: time="2026-04-17T23:36:01.091855071Z" level=error msg="encountered an error cleaning up failed sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.094163 containerd[2021]: time="2026-04-17T23:36:01.094075455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fbf6d5cb-ghhvr,Uid:406e705d-732b-4c0e-bff1-e277744b9161,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.095452 kubelet[3435]: E0417 23:36:01.095358 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.095452 kubelet[3435]: E0417 23:36:01.095447 
3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76fbf6d5cb-ghhvr" Apr 17 23:36:01.095816 kubelet[3435]: E0417 23:36:01.095484 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76fbf6d5cb-ghhvr" Apr 17 23:36:01.095816 kubelet[3435]: E0417 23:36:01.095563 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76fbf6d5cb-ghhvr_calico-system(406e705d-732b-4c0e-bff1-e277744b9161)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76fbf6d5cb-ghhvr_calico-system(406e705d-732b-4c0e-bff1-e277744b9161)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76fbf6d5cb-ghhvr" podUID="406e705d-732b-4c0e-bff1-e277744b9161" Apr 17 23:36:01.113862 containerd[2021]: time="2026-04-17T23:36:01.113708739Z" level=info msg="StartContainer for \"32fbb0f729a926f3f8782efbf98edb3456b05347aabbe5bee10e3d4eb21614e7\" 
returns successfully" Apr 17 23:36:01.153117 containerd[2021]: time="2026-04-17T23:36:01.153048819Z" level=error msg="Failed to destroy network for sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.154069 containerd[2021]: time="2026-04-17T23:36:01.154001679Z" level=error msg="encountered an error cleaning up failed sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.154468 containerd[2021]: time="2026-04-17T23:36:01.154307751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mh6rz,Uid:6421d27f-ddc6-4d22-86e3-4278c749f598,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.154866 kubelet[3435]: E0417 23:36:01.154638 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.156841 kubelet[3435]: E0417 23:36:01.155875 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mh6rz" Apr 17 23:36:01.156841 kubelet[3435]: E0417 23:36:01.155959 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mh6rz" Apr 17 23:36:01.157052 kubelet[3435]: E0417 23:36:01.156562 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mh6rz_kube-system(6421d27f-ddc6-4d22-86e3-4278c749f598)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mh6rz_kube-system(6421d27f-ddc6-4d22-86e3-4278c749f598)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mh6rz" podUID="6421d27f-ddc6-4d22-86e3-4278c749f598" Apr 17 23:36:01.170575 containerd[2021]: time="2026-04-17T23:36:01.170499075Z" level=error msg="Failed to destroy network for sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 23:36:01.171518 containerd[2021]: time="2026-04-17T23:36:01.171324159Z" level=error msg="encountered an error cleaning up failed sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.171518 containerd[2021]: time="2026-04-17T23:36:01.171413079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gf6r8,Uid:28f397bf-652b-49aa-8829-d5327f553244,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.174069 kubelet[3435]: E0417 23:36:01.172272 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.174069 kubelet[3435]: E0417 23:36:01.172351 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-gf6r8" Apr 17 23:36:01.174069 kubelet[3435]: E0417 
23:36:01.172386 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-gf6r8" Apr 17 23:36:01.174347 kubelet[3435]: E0417 23:36:01.172472 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-gf6r8_calico-system(28f397bf-652b-49aa-8829-d5327f553244)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-gf6r8_calico-system(28f397bf-652b-49aa-8829-d5327f553244)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-gf6r8" podUID="28f397bf-652b-49aa-8829-d5327f553244" Apr 17 23:36:01.174963 containerd[2021]: time="2026-04-17T23:36:01.174612435Z" level=error msg="Failed to destroy network for sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.177204 containerd[2021]: time="2026-04-17T23:36:01.176949747Z" level=error msg="encountered an error cleaning up failed sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.177204 containerd[2021]: time="2026-04-17T23:36:01.177100959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574894c46f-77jgd,Uid:cb8fd53d-17b0-4864-975d-aee521739f5b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.178543 kubelet[3435]: E0417 23:36:01.178026 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.178543 kubelet[3435]: E0417 23:36:01.178106 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-574894c46f-77jgd" Apr 17 23:36:01.178543 kubelet[3435]: E0417 23:36:01.178146 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-574894c46f-77jgd" Apr 17 23:36:01.178826 kubelet[3435]: E0417 23:36:01.178235 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-574894c46f-77jgd_calico-system(cb8fd53d-17b0-4864-975d-aee521739f5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-574894c46f-77jgd_calico-system(cb8fd53d-17b0-4864-975d-aee521739f5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-574894c46f-77jgd" podUID="cb8fd53d-17b0-4864-975d-aee521739f5b" Apr 17 23:36:01.211847 kubelet[3435]: I0417 23:36:01.208653 3435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:36:01.219489 containerd[2021]: time="2026-04-17T23:36:01.217315756Z" level=info msg="StopPodSandbox for \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\"" Apr 17 23:36:01.219489 containerd[2021]: time="2026-04-17T23:36:01.217650676Z" level=info msg="Ensure that sandbox 10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82 in task-service has been cleanup successfully" Apr 17 23:36:01.231810 kubelet[3435]: I0417 23:36:01.231627 3435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:36:01.236125 containerd[2021]: time="2026-04-17T23:36:01.235735384Z" level=info msg="StopPodSandbox for \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\"" Apr 17 23:36:01.240990 containerd[2021]: 
time="2026-04-17T23:36:01.240012148Z" level=info msg="Ensure that sandbox 29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3 in task-service has been cleanup successfully" Apr 17 23:36:01.271904 kubelet[3435]: I0417 23:36:01.271169 3435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:36:01.275912 containerd[2021]: time="2026-04-17T23:36:01.275859424Z" level=info msg="StopPodSandbox for \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\"" Apr 17 23:36:01.276414 containerd[2021]: time="2026-04-17T23:36:01.276377836Z" level=info msg="Ensure that sandbox eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6 in task-service has been cleanup successfully" Apr 17 23:36:01.285717 kubelet[3435]: I0417 23:36:01.284741 3435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:36:01.290017 containerd[2021]: time="2026-04-17T23:36:01.289949152Z" level=info msg="StopPodSandbox for \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\"" Apr 17 23:36:01.291462 containerd[2021]: time="2026-04-17T23:36:01.290288488Z" level=info msg="Ensure that sandbox c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b in task-service has been cleanup successfully" Apr 17 23:36:01.301549 kubelet[3435]: I0417 23:36:01.301508 3435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:01.305395 containerd[2021]: time="2026-04-17T23:36:01.305265808Z" level=info msg="StopPodSandbox for \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\"" Apr 17 23:36:01.305841 containerd[2021]: time="2026-04-17T23:36:01.305636032Z" level=info msg="Ensure that sandbox 
a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda in task-service has been cleanup successfully" Apr 17 23:36:01.320128 kubelet[3435]: I0417 23:36:01.319979 3435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:36:01.322832 containerd[2021]: time="2026-04-17T23:36:01.321666496Z" level=info msg="StopPodSandbox for \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\"" Apr 17 23:36:01.326780 containerd[2021]: time="2026-04-17T23:36:01.323330884Z" level=info msg="Ensure that sandbox abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7 in task-service has been cleanup successfully" Apr 17 23:36:01.345842 kubelet[3435]: I0417 23:36:01.345675 3435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:36:01.362264 kubelet[3435]: I0417 23:36:01.361259 3435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:36:01.371187 containerd[2021]: time="2026-04-17T23:36:01.369067996Z" level=info msg="StopPodSandbox for \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\"" Apr 17 23:36:01.371187 containerd[2021]: time="2026-04-17T23:36:01.369393712Z" level=info msg="Ensure that sandbox 789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad in task-service has been cleanup successfully" Apr 17 23:36:01.379916 containerd[2021]: time="2026-04-17T23:36:01.379757116Z" level=info msg="StopPodSandbox for \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\"" Apr 17 23:36:01.380204 containerd[2021]: time="2026-04-17T23:36:01.380151220Z" level=info msg="Ensure that sandbox 03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371 in task-service has been cleanup successfully" Apr 17 
23:36:01.689487 containerd[2021]: time="2026-04-17T23:36:01.689314386Z" level=error msg="StopPodSandbox for \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\" failed" error="failed to destroy network for sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.690028 kubelet[3435]: E0417 23:36:01.689963 3435 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:36:01.690305 kubelet[3435]: E0417 23:36:01.690057 3435 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82"} Apr 17 23:36:01.690305 kubelet[3435]: E0417 23:36:01.690138 3435 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6421d27f-ddc6-4d22-86e3-4278c749f598\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:36:01.690305 kubelet[3435]: E0417 23:36:01.690178 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6421d27f-ddc6-4d22-86e3-4278c749f598\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mh6rz" podUID="6421d27f-ddc6-4d22-86e3-4278c749f598" Apr 17 23:36:01.698471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b-shm.mount: Deactivated successfully. Apr 17 23:36:01.700152 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7-shm.mount: Deactivated successfully. Apr 17 23:36:01.700326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82-shm.mount: Deactivated successfully. Apr 17 23:36:01.700494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3-shm.mount: Deactivated successfully. Apr 17 23:36:01.700633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371-shm.mount: Deactivated successfully. Apr 17 23:36:01.700816 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda-shm.mount: Deactivated successfully. 
Apr 17 23:36:01.725906 kubelet[3435]: I0417 23:36:01.725587 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hfhp8" podStartSLOduration=5.377967348 podStartE2EDuration="22.72556419s" podCreationTimestamp="2026-04-17 23:35:39 +0000 UTC" firstStartedPulling="2026-04-17 23:35:40.31731752 +0000 UTC m=+36.883856957" lastFinishedPulling="2026-04-17 23:35:57.664914374 +0000 UTC m=+54.231453799" observedRunningTime="2026-04-17 23:36:01.278568352 +0000 UTC m=+57.845107789" watchObservedRunningTime="2026-04-17 23:36:01.72556419 +0000 UTC m=+58.292103639" Apr 17 23:36:01.760558 containerd[2021]: time="2026-04-17T23:36:01.760181502Z" level=error msg="StopPodSandbox for \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\" failed" error="failed to destroy network for sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.760691 kubelet[3435]: E0417 23:36:01.760484 3435 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:36:01.760691 kubelet[3435]: E0417 23:36:01.760553 3435 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b"} Apr 17 23:36:01.760691 kubelet[3435]: E0417 23:36:01.760609 3435 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"28f397bf-652b-49aa-8829-d5327f553244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:36:01.760691 kubelet[3435]: E0417 23:36:01.760648 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28f397bf-652b-49aa-8829-d5327f553244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-gf6r8" podUID="28f397bf-652b-49aa-8829-d5327f553244" Apr 17 23:36:01.768177 containerd[2021]: time="2026-04-17T23:36:01.767169702Z" level=error msg="StopPodSandbox for \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\" failed" error="failed to destroy network for sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.768463 kubelet[3435]: E0417 23:36:01.767660 3435 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:36:01.768463 kubelet[3435]: E0417 23:36:01.767733 3435 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3"} Apr 17 23:36:01.768463 kubelet[3435]: E0417 23:36:01.767863 3435 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"406e705d-732b-4c0e-bff1-e277744b9161\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:36:01.768463 kubelet[3435]: E0417 23:36:01.767910 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"406e705d-732b-4c0e-bff1-e277744b9161\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76fbf6d5cb-ghhvr" podUID="406e705d-732b-4c0e-bff1-e277744b9161" Apr 17 23:36:01.787184 containerd[2021]: time="2026-04-17T23:36:01.786982038Z" level=error msg="StopPodSandbox for \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\" failed" error="failed to destroy network for sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Apr 17 23:36:01.790126 kubelet[3435]: E0417 23:36:01.789265 3435 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:01.790126 kubelet[3435]: E0417 23:36:01.789866 3435 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda"} Apr 17 23:36:01.790126 kubelet[3435]: E0417 23:36:01.789959 3435 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:36:01.790126 kubelet[3435]: E0417 23:36:01.790034 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7647c56ff9-8x867" 
podUID="cb7e7a2a-af32-4537-b52a-928d1a505f9d" Apr 17 23:36:01.815446 containerd[2021]: time="2026-04-17T23:36:01.814852470Z" level=error msg="StopPodSandbox for \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\" failed" error="failed to destroy network for sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.815829 kubelet[3435]: E0417 23:36:01.815188 3435 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:36:01.815829 kubelet[3435]: E0417 23:36:01.815260 3435 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad"} Apr 17 23:36:01.815829 kubelet[3435]: E0417 23:36:01.815314 3435 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba48e602-86c6-47be-a12c-378408003d1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:36:01.815829 kubelet[3435]: E0417 23:36:01.815353 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"ba48e602-86c6-47be-a12c-378408003d1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gq6hp" podUID="ba48e602-86c6-47be-a12c-378408003d1d" Apr 17 23:36:01.820018 containerd[2021]: time="2026-04-17T23:36:01.819759871Z" level=error msg="StopPodSandbox for \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\" failed" error="failed to destroy network for sandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.820455 kubelet[3435]: E0417 23:36:01.820120 3435 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:36:01.820455 kubelet[3435]: E0417 23:36:01.820206 3435 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371"} Apr 17 23:36:01.820455 kubelet[3435]: E0417 23:36:01.820260 3435 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20e438e6-a5f5-45d7-b808-1a4fb95924d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:36:01.820455 kubelet[3435]: E0417 23:36:01.820309 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20e438e6-a5f5-45d7-b808-1a4fb95924d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-574894c46f-bcpkg" podUID="20e438e6-a5f5-45d7-b808-1a4fb95924d1" Apr 17 23:36:01.831671 containerd[2021]: time="2026-04-17T23:36:01.831288943Z" level=error msg="StopPodSandbox for \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\" failed" error="failed to destroy network for sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.833712 kubelet[3435]: E0417 23:36:01.832201 3435 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:36:01.833712 
kubelet[3435]: E0417 23:36:01.833436 3435 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6"} Apr 17 23:36:01.833712 kubelet[3435]: E0417 23:36:01.833542 3435 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c31bfe7e-863f-41d7-b162-ae069a76ee07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:36:01.833712 kubelet[3435]: E0417 23:36:01.833625 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c31bfe7e-863f-41d7-b162-ae069a76ee07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lghm6" podUID="c31bfe7e-863f-41d7-b162-ae069a76ee07" Apr 17 23:36:01.843172 containerd[2021]: time="2026-04-17T23:36:01.842961559Z" level=error msg="StopPodSandbox for \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\" failed" error="failed to destroy network for sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:36:01.843446 kubelet[3435]: E0417 23:36:01.843327 3435 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:36:01.843446 kubelet[3435]: E0417 23:36:01.843398 3435 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7"} Apr 17 23:36:01.843625 kubelet[3435]: E0417 23:36:01.843458 3435 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb8fd53d-17b0-4864-975d-aee521739f5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:36:01.843625 kubelet[3435]: E0417 23:36:01.843499 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb8fd53d-17b0-4864-975d-aee521739f5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-574894c46f-77jgd" podUID="cb8fd53d-17b0-4864-975d-aee521739f5b" Apr 17 23:36:02.365814 containerd[2021]: time="2026-04-17T23:36:02.365678705Z" level=info msg="StopPodSandbox for 
\"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\"" Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.532 [INFO][4773] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.533 [INFO][4773] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" iface="eth0" netns="/var/run/netns/cni-34e932fa-88c4-955f-1764-37c2a079480c" Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.534 [INFO][4773] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" iface="eth0" netns="/var/run/netns/cni-34e932fa-88c4-955f-1764-37c2a079480c" Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.535 [INFO][4773] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" iface="eth0" netns="/var/run/netns/cni-34e932fa-88c4-955f-1764-37c2a079480c" Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.535 [INFO][4773] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.535 [INFO][4773] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.613 [INFO][4795] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" HandleID="k8s-pod-network.a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Workload="ip--172--31--27--239-k8s-whisker--7647c56ff9--8x867-eth0" Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.614 [INFO][4795] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.614 [INFO][4795] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.629 [WARNING][4795] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" HandleID="k8s-pod-network.a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Workload="ip--172--31--27--239-k8s-whisker--7647c56ff9--8x867-eth0" Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.629 [INFO][4795] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" HandleID="k8s-pod-network.a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Workload="ip--172--31--27--239-k8s-whisker--7647c56ff9--8x867-eth0" Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.632 [INFO][4795] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:02.644910 containerd[2021]: 2026-04-17 23:36:02.639 [INFO][4773] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:02.644910 containerd[2021]: time="2026-04-17T23:36:02.642985795Z" level=info msg="TearDown network for sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\" successfully" Apr 17 23:36:02.644910 containerd[2021]: time="2026-04-17T23:36:02.643031179Z" level=info msg="StopPodSandbox for \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\" returns successfully" Apr 17 23:36:02.649943 systemd[1]: run-netns-cni\x2d34e932fa\x2d88c4\x2d955f\x2d1764\x2d37c2a079480c.mount: Deactivated successfully. 
Apr 17 23:36:02.774248 kubelet[3435]: I0417 23:36:02.774175 3435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7e7a2a-af32-4537-b52a-928d1a505f9d-whisker-ca-bundle\") pod \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\" (UID: \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\") " Apr 17 23:36:02.774884 kubelet[3435]: I0417 23:36:02.774265 3435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtqxg\" (UniqueName: \"kubernetes.io/projected/cb7e7a2a-af32-4537-b52a-928d1a505f9d-kube-api-access-xtqxg\") pod \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\" (UID: \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\") " Apr 17 23:36:02.774884 kubelet[3435]: I0417 23:36:02.774322 3435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb7e7a2a-af32-4537-b52a-928d1a505f9d-whisker-backend-key-pair\") pod \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\" (UID: \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\") " Apr 17 23:36:02.774884 kubelet[3435]: I0417 23:36:02.774365 3435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cb7e7a2a-af32-4537-b52a-928d1a505f9d-nginx-config\") pod \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\" (UID: \"cb7e7a2a-af32-4537-b52a-928d1a505f9d\") " Apr 17 23:36:02.775140 kubelet[3435]: I0417 23:36:02.775078 3435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7e7a2a-af32-4537-b52a-928d1a505f9d-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "cb7e7a2a-af32-4537-b52a-928d1a505f9d" (UID: "cb7e7a2a-af32-4537-b52a-928d1a505f9d"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:36:02.775719 kubelet[3435]: I0417 23:36:02.775668 3435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7e7a2a-af32-4537-b52a-928d1a505f9d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cb7e7a2a-af32-4537-b52a-928d1a505f9d" (UID: "cb7e7a2a-af32-4537-b52a-928d1a505f9d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:36:02.783201 kubelet[3435]: I0417 23:36:02.783128 3435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7e7a2a-af32-4537-b52a-928d1a505f9d-kube-api-access-xtqxg" (OuterVolumeSpecName: "kube-api-access-xtqxg") pod "cb7e7a2a-af32-4537-b52a-928d1a505f9d" (UID: "cb7e7a2a-af32-4537-b52a-928d1a505f9d"). InnerVolumeSpecName "kube-api-access-xtqxg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:36:02.787040 systemd[1]: var-lib-kubelet-pods-cb7e7a2a\x2daf32\x2d4537\x2db52a\x2d928d1a505f9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxtqxg.mount: Deactivated successfully. Apr 17 23:36:02.788356 kubelet[3435]: I0417 23:36:02.788175 3435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7e7a2a-af32-4537-b52a-928d1a505f9d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cb7e7a2a-af32-4537-b52a-928d1a505f9d" (UID: "cb7e7a2a-af32-4537-b52a-928d1a505f9d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:36:02.794679 systemd[1]: var-lib-kubelet-pods-cb7e7a2a\x2daf32\x2d4537\x2db52a\x2d928d1a505f9d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 17 23:36:02.875774 kubelet[3435]: I0417 23:36:02.875705 3435 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7e7a2a-af32-4537-b52a-928d1a505f9d-whisker-ca-bundle\") on node \"ip-172-31-27-239\" DevicePath \"\"" Apr 17 23:36:02.875774 kubelet[3435]: I0417 23:36:02.875762 3435 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xtqxg\" (UniqueName: \"kubernetes.io/projected/cb7e7a2a-af32-4537-b52a-928d1a505f9d-kube-api-access-xtqxg\") on node \"ip-172-31-27-239\" DevicePath \"\"" Apr 17 23:36:02.875982 kubelet[3435]: I0417 23:36:02.875819 3435 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb7e7a2a-af32-4537-b52a-928d1a505f9d-whisker-backend-key-pair\") on node \"ip-172-31-27-239\" DevicePath \"\"" Apr 17 23:36:02.875982 kubelet[3435]: I0417 23:36:02.875846 3435 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cb7e7a2a-af32-4537-b52a-928d1a505f9d-nginx-config\") on node \"ip-172-31-27-239\" DevicePath \"\"" Apr 17 23:36:03.392400 systemd[1]: Removed slice kubepods-besteffort-podcb7e7a2a_af32_4537_b52a_928d1a505f9d.slice - libcontainer container kubepods-besteffort-podcb7e7a2a_af32_4537_b52a_928d1a505f9d.slice. Apr 17 23:36:03.512294 systemd[1]: Created slice kubepods-besteffort-pod8ff7a992_1891_4cbb_ae69_24a6382559ab.slice - libcontainer container kubepods-besteffort-pod8ff7a992_1891_4cbb_ae69_24a6382559ab.slice. 
Apr 17 23:36:03.582913 kubelet[3435]: I0417 23:36:03.582846 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ff7a992-1891-4cbb-ae69-24a6382559ab-whisker-backend-key-pair\") pod \"whisker-cf7567b96-fb6x7\" (UID: \"8ff7a992-1891-4cbb-ae69-24a6382559ab\") " pod="calico-system/whisker-cf7567b96-fb6x7" Apr 17 23:36:03.583077 kubelet[3435]: I0417 23:36:03.582937 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ff7a992-1891-4cbb-ae69-24a6382559ab-whisker-ca-bundle\") pod \"whisker-cf7567b96-fb6x7\" (UID: \"8ff7a992-1891-4cbb-ae69-24a6382559ab\") " pod="calico-system/whisker-cf7567b96-fb6x7" Apr 17 23:36:03.583077 kubelet[3435]: I0417 23:36:03.582990 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnsqt\" (UniqueName: \"kubernetes.io/projected/8ff7a992-1891-4cbb-ae69-24a6382559ab-kube-api-access-jnsqt\") pod \"whisker-cf7567b96-fb6x7\" (UID: \"8ff7a992-1891-4cbb-ae69-24a6382559ab\") " pod="calico-system/whisker-cf7567b96-fb6x7" Apr 17 23:36:03.583077 kubelet[3435]: I0417 23:36:03.583035 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8ff7a992-1891-4cbb-ae69-24a6382559ab-nginx-config\") pod \"whisker-cf7567b96-fb6x7\" (UID: \"8ff7a992-1891-4cbb-ae69-24a6382559ab\") " pod="calico-system/whisker-cf7567b96-fb6x7" Apr 17 23:36:03.745393 containerd[2021]: time="2026-04-17T23:36:03.745256048Z" level=info msg="StopPodSandbox for \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\"" Apr 17 23:36:03.822177 containerd[2021]: time="2026-04-17T23:36:03.821760032Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-cf7567b96-fb6x7,Uid:8ff7a992-1891-4cbb-ae69-24a6382559ab,Namespace:calico-system,Attempt:0,}" Apr 17 23:36:03.850520 kubelet[3435]: I0417 23:36:03.848775 3435 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb7e7a2a-af32-4537-b52a-928d1a505f9d" path="/var/lib/kubelet/pods/cb7e7a2a-af32-4537-b52a-928d1a505f9d/volumes" Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:03.900 [WARNING][4906] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" WorkloadEndpoint="ip--172--31--27--239-k8s-whisker--7647c56ff9--8x867-eth0" Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:03.900 [INFO][4906] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:03.900 [INFO][4906] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" iface="eth0" netns="" Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:03.900 [INFO][4906] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:03.900 [INFO][4906] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:03.973 [INFO][4925] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" HandleID="k8s-pod-network.a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Workload="ip--172--31--27--239-k8s-whisker--7647c56ff9--8x867-eth0" Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:03.974 [INFO][4925] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:03.974 [INFO][4925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:04.004 [WARNING][4925] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" HandleID="k8s-pod-network.a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Workload="ip--172--31--27--239-k8s-whisker--7647c56ff9--8x867-eth0" Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:04.007 [INFO][4925] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" HandleID="k8s-pod-network.a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Workload="ip--172--31--27--239-k8s-whisker--7647c56ff9--8x867-eth0" Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:04.015 [INFO][4925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:04.030378 containerd[2021]: 2026-04-17 23:36:04.023 [INFO][4906] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:04.032404 containerd[2021]: time="2026-04-17T23:36:04.030362021Z" level=info msg="TearDown network for sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\" successfully" Apr 17 23:36:04.032404 containerd[2021]: time="2026-04-17T23:36:04.030400601Z" level=info msg="StopPodSandbox for \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\" returns successfully" Apr 17 23:36:04.035717 containerd[2021]: time="2026-04-17T23:36:04.035636142Z" level=info msg="RemovePodSandbox for \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\"" Apr 17 23:36:04.035717 containerd[2021]: time="2026-04-17T23:36:04.035702082Z" level=info msg="Forcibly stopping sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\"" Apr 17 23:36:04.362727 systemd-networkd[1925]: calif82ed82d077: Link UP Apr 17 23:36:04.369347 (udev-worker)[4971]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 23:36:04.372814 systemd-networkd[1925]: calif82ed82d077: Gained carrier Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.150 [WARNING][4943] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" WorkloadEndpoint="ip--172--31--27--239-k8s-whisker--7647c56ff9--8x867-eth0" Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.151 [INFO][4943] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.152 [INFO][4943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" iface="eth0" netns="" Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.153 [INFO][4943] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.153 [INFO][4943] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.267 [INFO][4959] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" HandleID="k8s-pod-network.a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Workload="ip--172--31--27--239-k8s-whisker--7647c56ff9--8x867-eth0" Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.269 [INFO][4959] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.296 [INFO][4959] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.354 [WARNING][4959] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" HandleID="k8s-pod-network.a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Workload="ip--172--31--27--239-k8s-whisker--7647c56ff9--8x867-eth0" Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.354 [INFO][4959] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" HandleID="k8s-pod-network.a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Workload="ip--172--31--27--239-k8s-whisker--7647c56ff9--8x867-eth0" Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.377 [INFO][4959] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:04.439385 containerd[2021]: 2026-04-17 23:36:04.414 [INFO][4943] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda" Apr 17 23:36:04.439385 containerd[2021]: time="2026-04-17T23:36:04.436964132Z" level=info msg="TearDown network for sandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\" successfully" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.002 [ERROR][4914] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.058 [INFO][4914] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0 whisker-cf7567b96- calico-system 8ff7a992-1891-4cbb-ae69-24a6382559ab 980 0 2026-04-17 23:36:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cf7567b96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-27-239 whisker-cf7567b96-fb6x7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif82ed82d077 [] [] }} ContainerID="2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" Namespace="calico-system" Pod="whisker-cf7567b96-fb6x7" WorkloadEndpoint="ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.058 [INFO][4914] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" Namespace="calico-system" Pod="whisker-cf7567b96-fb6x7" WorkloadEndpoint="ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.154 [INFO][4950] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" HandleID="k8s-pod-network.2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" Workload="ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.200 [INFO][4950] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" HandleID="k8s-pod-network.2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" Workload="ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001021a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-239", "pod":"whisker-cf7567b96-fb6x7", "timestamp":"2026-04-17 23:36:04.154566342 +0000 UTC"}, Hostname:"ip-172-31-27-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002b2580)} Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.200 [INFO][4950] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.201 [INFO][4950] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.201 [INFO][4950] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-239' Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.207 [INFO][4950] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" host="ip-172-31-27-239" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.219 [INFO][4950] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-239" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.237 [INFO][4950] ipam/ipam.go 526: Trying affinity for 192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.244 [INFO][4950] ipam/ipam.go 160: Attempting to load block cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.250 [INFO][4950] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.250 [INFO][4950] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.35.64/26 handle="k8s-pod-network.2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" host="ip-172-31-27-239" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.255 [INFO][4950] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.270 [INFO][4950] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.35.64/26 handle="k8s-pod-network.2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" host="ip-172-31-27-239" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.290 [INFO][4950] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.35.65/26] block=192.168.35.64/26 
handle="k8s-pod-network.2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" host="ip-172-31-27-239" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.290 [INFO][4950] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.35.65/26] handle="k8s-pod-network.2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" host="ip-172-31-27-239" Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.292 [INFO][4950] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:04.460499 containerd[2021]: 2026-04-17 23:36:04.292 [INFO][4950] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.35.65/26] IPv6=[] ContainerID="2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" HandleID="k8s-pod-network.2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" Workload="ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0" Apr 17 23:36:04.465705 containerd[2021]: 2026-04-17 23:36:04.309 [INFO][4914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" Namespace="calico-system" Pod="whisker-cf7567b96-fb6x7" WorkloadEndpoint="ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0", GenerateName:"whisker-cf7567b96-", Namespace:"calico-system", SelfLink:"", UID:"8ff7a992-1891-4cbb-ae69-24a6382559ab", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cf7567b96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"", Pod:"whisker-cf7567b96-fb6x7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif82ed82d077", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:04.465705 containerd[2021]: 2026-04-17 23:36:04.311 [INFO][4914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.65/32] ContainerID="2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" Namespace="calico-system" Pod="whisker-cf7567b96-fb6x7" WorkloadEndpoint="ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0" Apr 17 23:36:04.465705 containerd[2021]: 2026-04-17 23:36:04.311 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif82ed82d077 ContainerID="2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" Namespace="calico-system" Pod="whisker-cf7567b96-fb6x7" WorkloadEndpoint="ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0" Apr 17 23:36:04.465705 containerd[2021]: 2026-04-17 23:36:04.389 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" Namespace="calico-system" Pod="whisker-cf7567b96-fb6x7" WorkloadEndpoint="ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0" Apr 17 23:36:04.465705 containerd[2021]: 2026-04-17 23:36:04.390 [INFO][4914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" Namespace="calico-system" 
Pod="whisker-cf7567b96-fb6x7" WorkloadEndpoint="ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0", GenerateName:"whisker-cf7567b96-", Namespace:"calico-system", SelfLink:"", UID:"8ff7a992-1891-4cbb-ae69-24a6382559ab", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 36, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cf7567b96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c", Pod:"whisker-cf7567b96-fb6x7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif82ed82d077", MAC:"8a:b2:b2:ee:24:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:04.465705 containerd[2021]: 2026-04-17 23:36:04.421 [INFO][4914] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c" Namespace="calico-system" Pod="whisker-cf7567b96-fb6x7" WorkloadEndpoint="ip--172--31--27--239-k8s-whisker--cf7567b96--fb6x7-eth0" Apr 17 23:36:04.465705 containerd[2021]: 
time="2026-04-17T23:36:04.463464716Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:36:04.465705 containerd[2021]: time="2026-04-17T23:36:04.463558064Z" level=info msg="RemovePodSandbox \"a6cc44d1ee7c1f0bb3e96899362b8e476c0a6985d430cde360a624da0ba0adda\" returns successfully" Apr 17 23:36:04.549837 containerd[2021]: time="2026-04-17T23:36:04.549131132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:04.549837 containerd[2021]: time="2026-04-17T23:36:04.549269708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:04.549837 containerd[2021]: time="2026-04-17T23:36:04.549353276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:04.551818 containerd[2021]: time="2026-04-17T23:36:04.550570112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:04.610254 systemd[1]: Started cri-containerd-2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c.scope - libcontainer container 2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c. 
Apr 17 23:36:04.694030 containerd[2021]: time="2026-04-17T23:36:04.693962889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cf7567b96-fb6x7,Uid:8ff7a992-1891-4cbb-ae69-24a6382559ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c\"" Apr 17 23:36:04.717847 kernel: calico-node[4849]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:36:04.719687 containerd[2021]: time="2026-04-17T23:36:04.719463705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:36:05.526173 systemd-networkd[1925]: vxlan.calico: Link UP Apr 17 23:36:05.526193 systemd-networkd[1925]: vxlan.calico: Gained carrier Apr 17 23:36:05.575643 (udev-worker)[4970]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:36:05.967753 systemd-networkd[1925]: calif82ed82d077: Gained IPv6LL Apr 17 23:36:06.341365 containerd[2021]: time="2026-04-17T23:36:06.341212341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:06.343647 containerd[2021]: time="2026-04-17T23:36:06.343526733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 17 23:36:06.346633 containerd[2021]: time="2026-04-17T23:36:06.346544865Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:06.351891 containerd[2021]: time="2026-04-17T23:36:06.351767925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:06.353481 containerd[2021]: time="2026-04-17T23:36:06.353432613Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.633643636s" Apr 17 23:36:06.354355 containerd[2021]: time="2026-04-17T23:36:06.353635173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 17 23:36:06.365260 containerd[2021]: time="2026-04-17T23:36:06.365201505Z" level=info msg="CreateContainer within sandbox \"2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:36:06.397361 containerd[2021]: time="2026-04-17T23:36:06.397175121Z" level=info msg="CreateContainer within sandbox \"2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1800ec21882852ef12456ce06cc4f2cf9dc68e4947867210399c97723f1251ce\"" Apr 17 23:36:06.399035 containerd[2021]: time="2026-04-17T23:36:06.398050281Z" level=info msg="StartContainer for \"1800ec21882852ef12456ce06cc4f2cf9dc68e4947867210399c97723f1251ce\"" Apr 17 23:36:06.465411 systemd[1]: run-containerd-runc-k8s.io-1800ec21882852ef12456ce06cc4f2cf9dc68e4947867210399c97723f1251ce-runc.zF0QF1.mount: Deactivated successfully. Apr 17 23:36:06.476147 systemd[1]: Started cri-containerd-1800ec21882852ef12456ce06cc4f2cf9dc68e4947867210399c97723f1251ce.scope - libcontainer container 1800ec21882852ef12456ce06cc4f2cf9dc68e4947867210399c97723f1251ce. 
Apr 17 23:36:06.563086 containerd[2021]: time="2026-04-17T23:36:06.562461478Z" level=info msg="StartContainer for \"1800ec21882852ef12456ce06cc4f2cf9dc68e4947867210399c97723f1251ce\" returns successfully" Apr 17 23:36:06.567443 containerd[2021]: time="2026-04-17T23:36:06.567390730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:36:07.245572 systemd-networkd[1925]: vxlan.calico: Gained IPv6LL Apr 17 23:36:08.508948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351820237.mount: Deactivated successfully. Apr 17 23:36:08.553706 containerd[2021]: time="2026-04-17T23:36:08.553628724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:08.555827 containerd[2021]: time="2026-04-17T23:36:08.555646272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 17 23:36:08.558200 containerd[2021]: time="2026-04-17T23:36:08.558122004Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:08.564413 containerd[2021]: time="2026-04-17T23:36:08.564328932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:08.566867 containerd[2021]: time="2026-04-17T23:36:08.566672040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" 
in 1.99893757s" Apr 17 23:36:08.566867 containerd[2021]: time="2026-04-17T23:36:08.566759892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 17 23:36:08.577202 containerd[2021]: time="2026-04-17T23:36:08.577142712Z" level=info msg="CreateContainer within sandbox \"2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:36:08.610414 containerd[2021]: time="2026-04-17T23:36:08.610329816Z" level=info msg="CreateContainer within sandbox \"2f6a4793e6ada054cc4b02b5c4ff2c7b1a3f0f820d87ca26fa3eb9c3e1f7e59c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0980180cb7a4bae9abb1135cbfee320d2f6cb064e259f2d038ba10f658f8e5b2\"" Apr 17 23:36:08.612847 containerd[2021]: time="2026-04-17T23:36:08.611434092Z" level=info msg="StartContainer for \"0980180cb7a4bae9abb1135cbfee320d2f6cb064e259f2d038ba10f658f8e5b2\"" Apr 17 23:36:08.682111 systemd[1]: Started cri-containerd-0980180cb7a4bae9abb1135cbfee320d2f6cb064e259f2d038ba10f658f8e5b2.scope - libcontainer container 0980180cb7a4bae9abb1135cbfee320d2f6cb064e259f2d038ba10f658f8e5b2. 
Apr 17 23:36:08.752511 containerd[2021]: time="2026-04-17T23:36:08.752369857Z" level=info msg="StartContainer for \"0980180cb7a4bae9abb1135cbfee320d2f6cb064e259f2d038ba10f658f8e5b2\" returns successfully" Apr 17 23:36:09.472486 kubelet[3435]: I0417 23:36:09.472391 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-cf7567b96-fb6x7" podStartSLOduration=2.610627198 podStartE2EDuration="6.472362709s" podCreationTimestamp="2026-04-17 23:36:03 +0000 UTC" firstStartedPulling="2026-04-17 23:36:04.707107173 +0000 UTC m=+61.273646610" lastFinishedPulling="2026-04-17 23:36:08.568842696 +0000 UTC m=+65.135382121" observedRunningTime="2026-04-17 23:36:09.471055033 +0000 UTC m=+66.037594470" watchObservedRunningTime="2026-04-17 23:36:09.472362709 +0000 UTC m=+66.038902134" Apr 17 23:36:09.968228 ntpd[1991]: Listen normally on 8 vxlan.calico 192.168.35.64:123 Apr 17 23:36:09.969093 ntpd[1991]: 17 Apr 23:36:09 ntpd[1991]: Listen normally on 8 vxlan.calico 192.168.35.64:123 Apr 17 23:36:09.969093 ntpd[1991]: 17 Apr 23:36:09 ntpd[1991]: Listen normally on 9 calif82ed82d077 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:36:09.969093 ntpd[1991]: 17 Apr 23:36:09 ntpd[1991]: Listen normally on 10 vxlan.calico [fe80::64d4:d3ff:fe83:4d0c%5]:123 Apr 17 23:36:09.968364 ntpd[1991]: Listen normally on 9 calif82ed82d077 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:36:09.968446 ntpd[1991]: Listen normally on 10 vxlan.calico [fe80::64d4:d3ff:fe83:4d0c%5]:123 Apr 17 23:36:11.180336 systemd[1]: Started sshd@7-172.31.27.239:22-4.175.71.9:43638.service - OpenSSH per-connection server daemon (4.175.71.9:43638). Apr 17 23:36:12.186321 sshd[5250]: Accepted publickey for core from 4.175.71.9 port 43638 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:12.190407 sshd[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:12.198111 systemd-logind[1998]: New session 8 of user core. 
Apr 17 23:36:12.208073 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:36:12.829386 containerd[2021]: time="2026-04-17T23:36:12.829265057Z" level=info msg="StopPodSandbox for \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\"" Apr 17 23:36:12.832816 containerd[2021]: time="2026-04-17T23:36:12.830283509Z" level=info msg="StopPodSandbox for \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\"" Apr 17 23:36:12.835653 containerd[2021]: time="2026-04-17T23:36:12.835478105Z" level=info msg="StopPodSandbox for \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\"" Apr 17 23:36:12.837083 containerd[2021]: time="2026-04-17T23:36:12.836517725Z" level=info msg="StopPodSandbox for \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\"" Apr 17 23:36:13.039169 sshd[5250]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:13.057088 systemd[1]: sshd@7-172.31.27.239:22-4.175.71.9:43638.service: Deactivated successfully. Apr 17 23:36:13.067924 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:36:13.073722 systemd-logind[1998]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:36:13.079003 systemd-logind[1998]: Removed session 8. Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.046 [INFO][5299] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.048 [INFO][5299] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" iface="eth0" netns="/var/run/netns/cni-3d9b1b02-4317-1335-10b6-13ed700dddff" Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.048 [INFO][5299] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" iface="eth0" netns="/var/run/netns/cni-3d9b1b02-4317-1335-10b6-13ed700dddff" Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.053 [INFO][5299] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" iface="eth0" netns="/var/run/netns/cni-3d9b1b02-4317-1335-10b6-13ed700dddff" Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.054 [INFO][5299] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.054 [INFO][5299] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.176 [INFO][5322] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" HandleID="k8s-pod-network.abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.177 [INFO][5322] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.177 [INFO][5322] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.223 [WARNING][5322] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" HandleID="k8s-pod-network.abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.223 [INFO][5322] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" HandleID="k8s-pod-network.abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.235 [INFO][5322] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:13.271719 containerd[2021]: 2026-04-17 23:36:13.257 [INFO][5299] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:36:13.282149 systemd[1]: run-netns-cni\x2d3d9b1b02\x2d4317\x2d1335\x2d10b6\x2d13ed700dddff.mount: Deactivated successfully. 
Apr 17 23:36:13.285724 containerd[2021]: time="2026-04-17T23:36:13.285519159Z" level=info msg="TearDown network for sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\" successfully" Apr 17 23:36:13.285724 containerd[2021]: time="2026-04-17T23:36:13.285569655Z" level=info msg="StopPodSandbox for \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\" returns successfully" Apr 17 23:36:13.289592 containerd[2021]: time="2026-04-17T23:36:13.288380343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574894c46f-77jgd,Uid:cb8fd53d-17b0-4864-975d-aee521739f5b,Namespace:calico-system,Attempt:1,}" Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.145 [INFO][5291] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.147 [INFO][5291] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" iface="eth0" netns="/var/run/netns/cni-9985810b-6741-5dfb-77dc-ca051122cf32" Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.150 [INFO][5291] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" iface="eth0" netns="/var/run/netns/cni-9985810b-6741-5dfb-77dc-ca051122cf32" Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.152 [INFO][5291] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" iface="eth0" netns="/var/run/netns/cni-9985810b-6741-5dfb-77dc-ca051122cf32" Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.152 [INFO][5291] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.153 [INFO][5291] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.245 [INFO][5337] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" HandleID="k8s-pod-network.03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.246 [INFO][5337] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.246 [INFO][5337] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.269 [WARNING][5337] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" HandleID="k8s-pod-network.03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.269 [INFO][5337] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" HandleID="k8s-pod-network.03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.279 [INFO][5337] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:13.303284 containerd[2021]: 2026-04-17 23:36:13.295 [INFO][5291] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:36:13.307027 containerd[2021]: time="2026-04-17T23:36:13.306206272Z" level=info msg="TearDown network for sandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\" successfully" Apr 17 23:36:13.307027 containerd[2021]: time="2026-04-17T23:36:13.306389524Z" level=info msg="StopPodSandbox for \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\" returns successfully" Apr 17 23:36:13.312261 systemd[1]: run-netns-cni\x2d9985810b\x2d6741\x2d5dfb\x2d77dc\x2dca051122cf32.mount: Deactivated successfully. 
Apr 17 23:36:13.314299 containerd[2021]: time="2026-04-17T23:36:13.313735180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574894c46f-bcpkg,Uid:20e438e6-a5f5-45d7-b808-1a4fb95924d1,Namespace:calico-system,Attempt:1,}" Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.140 [INFO][5300] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.140 [INFO][5300] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" iface="eth0" netns="/var/run/netns/cni-3a6c0002-7add-a05b-e5cd-797b86006311" Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.140 [INFO][5300] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" iface="eth0" netns="/var/run/netns/cni-3a6c0002-7add-a05b-e5cd-797b86006311" Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.144 [INFO][5300] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" iface="eth0" netns="/var/run/netns/cni-3a6c0002-7add-a05b-e5cd-797b86006311" Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.144 [INFO][5300] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.145 [INFO][5300] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.332 [INFO][5331] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" HandleID="k8s-pod-network.29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.333 [INFO][5331] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.333 [INFO][5331] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.377 [WARNING][5331] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" HandleID="k8s-pod-network.29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.377 [INFO][5331] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" HandleID="k8s-pod-network.29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.391 [INFO][5331] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:13.422405 containerd[2021]: 2026-04-17 23:36:13.409 [INFO][5300] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:36:13.425836 containerd[2021]: time="2026-04-17T23:36:13.424905172Z" level=info msg="TearDown network for sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\" successfully" Apr 17 23:36:13.425836 containerd[2021]: time="2026-04-17T23:36:13.424960708Z" level=info msg="StopPodSandbox for \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\" returns successfully" Apr 17 23:36:13.433848 containerd[2021]: time="2026-04-17T23:36:13.433277200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fbf6d5cb-ghhvr,Uid:406e705d-732b-4c0e-bff1-e277744b9161,Namespace:calico-system,Attempt:1,}" Apr 17 23:36:13.433975 systemd[1]: run-netns-cni\x2d3a6c0002\x2d7add\x2da05b\x2de5cd\x2d797b86006311.mount: Deactivated successfully. 
Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.162 [INFO][5298] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.165 [INFO][5298] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" iface="eth0" netns="/var/run/netns/cni-4a200ac4-34be-bb9e-c9e4-8fe481158a1f" Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.169 [INFO][5298] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" iface="eth0" netns="/var/run/netns/cni-4a200ac4-34be-bb9e-c9e4-8fe481158a1f" Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.172 [INFO][5298] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" iface="eth0" netns="/var/run/netns/cni-4a200ac4-34be-bb9e-c9e4-8fe481158a1f" Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.172 [INFO][5298] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.172 [INFO][5298] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.339 [INFO][5340] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" HandleID="k8s-pod-network.eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.339 [INFO][5340] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.391 [INFO][5340] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.422 [WARNING][5340] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" HandleID="k8s-pod-network.eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.422 [INFO][5340] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" HandleID="k8s-pod-network.eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.430 [INFO][5340] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:13.450171 containerd[2021]: 2026-04-17 23:36:13.445 [INFO][5298] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:36:13.452430 containerd[2021]: time="2026-04-17T23:36:13.450479332Z" level=info msg="TearDown network for sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\" successfully" Apr 17 23:36:13.452430 containerd[2021]: time="2026-04-17T23:36:13.450608080Z" level=info msg="StopPodSandbox for \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\" returns successfully" Apr 17 23:36:13.453886 containerd[2021]: time="2026-04-17T23:36:13.453026212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lghm6,Uid:c31bfe7e-863f-41d7-b162-ae069a76ee07,Namespace:kube-system,Attempt:1,}" Apr 17 23:36:13.829594 systemd-networkd[1925]: calif1c70744b6a: Link UP Apr 17 23:36:13.831622 systemd-networkd[1925]: calif1c70744b6a: Gained carrier Apr 17 23:36:13.849331 (udev-worker)[5430]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.518 [INFO][5357] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0 calico-apiserver-574894c46f- calico-system 20e438e6-a5f5-45d7-b808-1a4fb95924d1 1068 0 2026-04-17 23:35:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:574894c46f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-239 calico-apiserver-574894c46f-bcpkg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif1c70744b6a [] [] }} ContainerID="9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" Namespace="calico-system" Pod="calico-apiserver-574894c46f-bcpkg" 
WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-" Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.519 [INFO][5357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" Namespace="calico-system" Pod="calico-apiserver-574894c46f-bcpkg" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.687 [INFO][5399] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" HandleID="k8s-pod-network.9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.722 [INFO][5399] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" HandleID="k8s-pod-network.9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034d9a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-239", "pod":"calico-apiserver-574894c46f-bcpkg", "timestamp":"2026-04-17 23:36:13.687379853 +0000 UTC"}, Hostname:"ip-172-31-27-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40005758c0)} Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.723 [INFO][5399] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.723 [INFO][5399] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.723 [INFO][5399] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-239' Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.735 [INFO][5399] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" host="ip-172-31-27-239" Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.753 [INFO][5399] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-239" Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.766 [INFO][5399] ipam/ipam.go 526: Trying affinity for 192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.773 [INFO][5399] ipam/ipam.go 160: Attempting to load block cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.782 [INFO][5399] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.782 [INFO][5399] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.35.64/26 handle="k8s-pod-network.9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" host="ip-172-31-27-239" Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.786 [INFO][5399] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3 Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.797 [INFO][5399] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.35.64/26 handle="k8s-pod-network.9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" host="ip-172-31-27-239" Apr 17 23:36:13.884895 
containerd[2021]: 2026-04-17 23:36:13.814 [INFO][5399] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.35.66/26] block=192.168.35.64/26 handle="k8s-pod-network.9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" host="ip-172-31-27-239" Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.814 [INFO][5399] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.35.66/26] handle="k8s-pod-network.9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" host="ip-172-31-27-239" Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.814 [INFO][5399] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:13.884895 containerd[2021]: 2026-04-17 23:36:13.814 [INFO][5399] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.35.66/26] IPv6=[] ContainerID="9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" HandleID="k8s-pod-network.9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:36:13.894958 containerd[2021]: 2026-04-17 23:36:13.820 [INFO][5357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" Namespace="calico-system" Pod="calico-apiserver-574894c46f-bcpkg" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0", GenerateName:"calico-apiserver-574894c46f-", Namespace:"calico-system", SelfLink:"", UID:"20e438e6-a5f5-45d7-b808-1a4fb95924d1", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574894c46f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"", Pod:"calico-apiserver-574894c46f-bcpkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif1c70744b6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:13.894958 containerd[2021]: 2026-04-17 23:36:13.821 [INFO][5357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.66/32] ContainerID="9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" Namespace="calico-system" Pod="calico-apiserver-574894c46f-bcpkg" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:36:13.894958 containerd[2021]: 2026-04-17 23:36:13.821 [INFO][5357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1c70744b6a ContainerID="9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" Namespace="calico-system" Pod="calico-apiserver-574894c46f-bcpkg" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:36:13.894958 containerd[2021]: 2026-04-17 23:36:13.833 [INFO][5357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" Namespace="calico-system" Pod="calico-apiserver-574894c46f-bcpkg" 
WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:36:13.894958 containerd[2021]: 2026-04-17 23:36:13.838 [INFO][5357] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" Namespace="calico-system" Pod="calico-apiserver-574894c46f-bcpkg" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0", GenerateName:"calico-apiserver-574894c46f-", Namespace:"calico-system", SelfLink:"", UID:"20e438e6-a5f5-45d7-b808-1a4fb95924d1", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574894c46f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3", Pod:"calico-apiserver-574894c46f-bcpkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif1c70744b6a", MAC:"6a:1a:85:d9:9e:29", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:13.894958 containerd[2021]: 2026-04-17 23:36:13.871 [INFO][5357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3" Namespace="calico-system" Pod="calico-apiserver-574894c46f-bcpkg" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:36:13.988028 containerd[2021]: time="2026-04-17T23:36:13.986196019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:13.988028 containerd[2021]: time="2026-04-17T23:36:13.986313703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:13.988028 containerd[2021]: time="2026-04-17T23:36:13.986343643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:13.988028 containerd[2021]: time="2026-04-17T23:36:13.986540455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:13.992536 (udev-worker)[5435]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 23:36:14.007635 systemd-networkd[1925]: cali26c9cc77021: Link UP Apr 17 23:36:14.014572 systemd-networkd[1925]: cali26c9cc77021: Gained carrier Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.577 [INFO][5354] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0 calico-apiserver-574894c46f- calico-system cb8fd53d-17b0-4864-975d-aee521739f5b 1064 0 2026-04-17 23:35:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:574894c46f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-239 calico-apiserver-574894c46f-77jgd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali26c9cc77021 [] [] }} ContainerID="848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" Namespace="calico-system" Pod="calico-apiserver-574894c46f-77jgd" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.578 [INFO][5354] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" Namespace="calico-system" Pod="calico-apiserver-574894c46f-77jgd" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.695 [INFO][5407] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" HandleID="k8s-pod-network.848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.731 
[INFO][5407] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" HandleID="k8s-pod-network.848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cc60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-239", "pod":"calico-apiserver-574894c46f-77jgd", "timestamp":"2026-04-17 23:36:13.695751257 +0000 UTC"}, Hostname:"ip-172-31-27-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400018cc60)} Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.732 [INFO][5407] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.815 [INFO][5407] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.815 [INFO][5407] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-239' Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.836 [INFO][5407] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" host="ip-172-31-27-239" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.856 [INFO][5407] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-239" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.879 [INFO][5407] ipam/ipam.go 526: Trying affinity for 192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.887 [INFO][5407] ipam/ipam.go 160: Attempting to load block cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.903 [INFO][5407] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.907 [INFO][5407] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.35.64/26 handle="k8s-pod-network.848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" host="ip-172-31-27-239" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.925 [INFO][5407] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.948 [INFO][5407] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.35.64/26 handle="k8s-pod-network.848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" host="ip-172-31-27-239" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.962 [INFO][5407] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.35.67/26] block=192.168.35.64/26 
handle="k8s-pod-network.848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" host="ip-172-31-27-239" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.963 [INFO][5407] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.35.67/26] handle="k8s-pod-network.848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" host="ip-172-31-27-239" Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.963 [INFO][5407] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:14.082418 containerd[2021]: 2026-04-17 23:36:13.963 [INFO][5407] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.35.67/26] IPv6=[] ContainerID="848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" HandleID="k8s-pod-network.848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:36:14.083523 containerd[2021]: 2026-04-17 23:36:13.977 [INFO][5354] cni-plugin/k8s.go 418: Populated endpoint ContainerID="848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" Namespace="calico-system" Pod="calico-apiserver-574894c46f-77jgd" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0", GenerateName:"calico-apiserver-574894c46f-", Namespace:"calico-system", SelfLink:"", UID:"cb8fd53d-17b0-4864-975d-aee521739f5b", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574894c46f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"", Pod:"calico-apiserver-574894c46f-77jgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali26c9cc77021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:14.083523 containerd[2021]: 2026-04-17 23:36:13.979 [INFO][5354] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.67/32] ContainerID="848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" Namespace="calico-system" Pod="calico-apiserver-574894c46f-77jgd" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:36:14.083523 containerd[2021]: 2026-04-17 23:36:13.979 [INFO][5354] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26c9cc77021 ContainerID="848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" Namespace="calico-system" Pod="calico-apiserver-574894c46f-77jgd" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:36:14.083523 containerd[2021]: 2026-04-17 23:36:14.022 [INFO][5354] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" Namespace="calico-system" Pod="calico-apiserver-574894c46f-77jgd" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:36:14.083523 containerd[2021]: 2026-04-17 23:36:14.040 [INFO][5354] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" Namespace="calico-system" Pod="calico-apiserver-574894c46f-77jgd" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0", GenerateName:"calico-apiserver-574894c46f-", Namespace:"calico-system", SelfLink:"", UID:"cb8fd53d-17b0-4864-975d-aee521739f5b", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574894c46f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a", Pod:"calico-apiserver-574894c46f-77jgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali26c9cc77021", MAC:"e2:2a:c0:21:d5:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:14.083523 containerd[2021]: 2026-04-17 23:36:14.069 [INFO][5354] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a" Namespace="calico-system" Pod="calico-apiserver-574894c46f-77jgd" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:36:14.132374 systemd[1]: Started cri-containerd-9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3.scope - libcontainer container 9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3. Apr 17 23:36:14.156724 systemd-networkd[1925]: calic8b6d65e34b: Link UP Apr 17 23:36:14.160344 systemd-networkd[1925]: calic8b6d65e34b: Gained carrier Apr 17 23:36:14.218193 containerd[2021]: time="2026-04-17T23:36:14.217661632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:14.218193 containerd[2021]: time="2026-04-17T23:36:14.217851820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:14.219885 containerd[2021]: time="2026-04-17T23:36:14.218466964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:14.219885 containerd[2021]: time="2026-04-17T23:36:14.218726092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:13.626 [INFO][5378] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0 calico-kube-controllers-76fbf6d5cb- calico-system 406e705d-732b-4c0e-bff1-e277744b9161 1067 0 2026-04-17 23:35:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76fbf6d5cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-27-239 calico-kube-controllers-76fbf6d5cb-ghhvr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic8b6d65e34b [] [] }} ContainerID="5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" Namespace="calico-system" Pod="calico-kube-controllers-76fbf6d5cb-ghhvr" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:13.627 [INFO][5378] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" Namespace="calico-system" Pod="calico-kube-controllers-76fbf6d5cb-ghhvr" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:13.764 [INFO][5414] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" HandleID="k8s-pod-network.5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:13.796 [INFO][5414] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" HandleID="k8s-pod-network.5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000122980), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-239", "pod":"calico-kube-controllers-76fbf6d5cb-ghhvr", "timestamp":"2026-04-17 23:36:13.764900322 +0000 UTC"}, Hostname:"ip-172-31-27-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186840)} Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:13.796 [INFO][5414] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:13.963 [INFO][5414] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:13.964 [INFO][5414] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-239' Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:13.971 [INFO][5414] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" host="ip-172-31-27-239" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:13.993 [INFO][5414] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-239" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:14.033 [INFO][5414] ipam/ipam.go 526: Trying affinity for 192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:14.043 [INFO][5414] ipam/ipam.go 160: Attempting to load block cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:14.056 [INFO][5414] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:14.056 [INFO][5414] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.35.64/26 handle="k8s-pod-network.5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" host="ip-172-31-27-239" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:14.062 [INFO][5414] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0 Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:14.076 [INFO][5414] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.35.64/26 handle="k8s-pod-network.5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" host="ip-172-31-27-239" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:14.105 [INFO][5414] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.35.68/26] block=192.168.35.64/26 
handle="k8s-pod-network.5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" host="ip-172-31-27-239" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:14.108 [INFO][5414] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.35.68/26] handle="k8s-pod-network.5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" host="ip-172-31-27-239" Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:14.108 [INFO][5414] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:14.239161 containerd[2021]: 2026-04-17 23:36:14.108 [INFO][5414] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.35.68/26] IPv6=[] ContainerID="5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" HandleID="k8s-pod-network.5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:36:14.243527 containerd[2021]: 2026-04-17 23:36:14.139 [INFO][5378] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" Namespace="calico-system" Pod="calico-kube-controllers-76fbf6d5cb-ghhvr" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0", GenerateName:"calico-kube-controllers-76fbf6d5cb-", Namespace:"calico-system", SelfLink:"", UID:"406e705d-732b-4c0e-bff1-e277744b9161", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fbf6d5cb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"", Pod:"calico-kube-controllers-76fbf6d5cb-ghhvr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic8b6d65e34b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:14.243527 containerd[2021]: 2026-04-17 23:36:14.139 [INFO][5378] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.68/32] ContainerID="5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" Namespace="calico-system" Pod="calico-kube-controllers-76fbf6d5cb-ghhvr" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:36:14.243527 containerd[2021]: 2026-04-17 23:36:14.139 [INFO][5378] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8b6d65e34b ContainerID="5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" Namespace="calico-system" Pod="calico-kube-controllers-76fbf6d5cb-ghhvr" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:36:14.243527 containerd[2021]: 2026-04-17 23:36:14.172 [INFO][5378] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" Namespace="calico-system" Pod="calico-kube-controllers-76fbf6d5cb-ghhvr" 
WorkloadEndpoint="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:36:14.243527 containerd[2021]: 2026-04-17 23:36:14.178 [INFO][5378] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" Namespace="calico-system" Pod="calico-kube-controllers-76fbf6d5cb-ghhvr" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0", GenerateName:"calico-kube-controllers-76fbf6d5cb-", Namespace:"calico-system", SelfLink:"", UID:"406e705d-732b-4c0e-bff1-e277744b9161", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fbf6d5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0", Pod:"calico-kube-controllers-76fbf6d5cb-ghhvr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic8b6d65e34b", 
MAC:"ca:76:2d:41:1a:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:14.243527 containerd[2021]: 2026-04-17 23:36:14.222 [INFO][5378] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0" Namespace="calico-system" Pod="calico-kube-controllers-76fbf6d5cb-ghhvr" WorkloadEndpoint="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:36:14.295999 systemd-networkd[1925]: cali24d586fe50b: Link UP Apr 17 23:36:14.296463 systemd-networkd[1925]: cali24d586fe50b: Gained carrier Apr 17 23:36:14.316364 systemd[1]: run-netns-cni\x2d4a200ac4\x2d34be\x2dbb9e\x2dc9e4\x2d8fe481158a1f.mount: Deactivated successfully. Apr 17 23:36:14.366468 systemd[1]: Started cri-containerd-848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a.scope - libcontainer container 848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a. Apr 17 23:36:14.390878 containerd[2021]: time="2026-04-17T23:36:14.389861153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:14.390878 containerd[2021]: time="2026-04-17T23:36:14.389968049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:14.390878 containerd[2021]: time="2026-04-17T23:36:14.389995325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:14.390878 containerd[2021]: time="2026-04-17T23:36:14.390162953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:13.729 [INFO][5392] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0 coredns-674b8bbfcf- kube-system c31bfe7e-863f-41d7-b162-ae069a76ee07 1069 0 2026-04-17 23:35:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-239 coredns-674b8bbfcf-lghm6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali24d586fe50b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" Namespace="kube-system" Pod="coredns-674b8bbfcf-lghm6" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:13.733 [INFO][5392] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" Namespace="kube-system" Pod="coredns-674b8bbfcf-lghm6" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:13.875 [INFO][5423] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" HandleID="k8s-pod-network.bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:13.924 [INFO][5423] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" 
HandleID="k8s-pod-network.bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000212a30), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-239", "pod":"coredns-674b8bbfcf-lghm6", "timestamp":"2026-04-17 23:36:13.875734854 +0000 UTC"}, Hostname:"ip-172-31-27-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000454420)} Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:13.925 [INFO][5423] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.108 [INFO][5423] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.108 [INFO][5423] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-239' Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.118 [INFO][5423] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" host="ip-172-31-27-239" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.139 [INFO][5423] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-239" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.161 [INFO][5423] ipam/ipam.go 526: Trying affinity for 192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.174 [INFO][5423] ipam/ipam.go 160: Attempting to load block cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.186 [INFO][5423] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.186 [INFO][5423] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.35.64/26 handle="k8s-pod-network.bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" host="ip-172-31-27-239" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.193 [INFO][5423] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6 Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.222 [INFO][5423] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.35.64/26 handle="k8s-pod-network.bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" host="ip-172-31-27-239" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.252 [INFO][5423] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.35.69/26] block=192.168.35.64/26 handle="k8s-pod-network.bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" host="ip-172-31-27-239" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.253 [INFO][5423] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.35.69/26] handle="k8s-pod-network.bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" host="ip-172-31-27-239" Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.253 [INFO][5423] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:36:14.456620 containerd[2021]: 2026-04-17 23:36:14.253 [INFO][5423] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.35.69/26] IPv6=[] ContainerID="bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" HandleID="k8s-pod-network.bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:36:14.457749 containerd[2021]: 2026-04-17 23:36:14.267 [INFO][5392] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" Namespace="kube-system" Pod="coredns-674b8bbfcf-lghm6" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c31bfe7e-863f-41d7-b162-ae069a76ee07", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"", Pod:"coredns-674b8bbfcf-lghm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24d586fe50b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:14.457749 containerd[2021]: 2026-04-17 23:36:14.267 [INFO][5392] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.69/32] ContainerID="bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" Namespace="kube-system" Pod="coredns-674b8bbfcf-lghm6" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:36:14.457749 containerd[2021]: 2026-04-17 23:36:14.267 [INFO][5392] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24d586fe50b ContainerID="bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" Namespace="kube-system" Pod="coredns-674b8bbfcf-lghm6" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:36:14.457749 containerd[2021]: 2026-04-17 23:36:14.313 [INFO][5392] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" Namespace="kube-system" Pod="coredns-674b8bbfcf-lghm6" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:36:14.457749 containerd[2021]: 2026-04-17 23:36:14.336 [INFO][5392] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" Namespace="kube-system" Pod="coredns-674b8bbfcf-lghm6" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c31bfe7e-863f-41d7-b162-ae069a76ee07", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6", Pod:"coredns-674b8bbfcf-lghm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24d586fe50b", MAC:"ce:e3:40:dd:b2:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:14.457749 containerd[2021]: 2026-04-17 23:36:14.441 [INFO][5392] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6" Namespace="kube-system" Pod="coredns-674b8bbfcf-lghm6" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:36:14.486155 systemd[1]: Started cri-containerd-5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0.scope - libcontainer container 5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0. Apr 17 23:36:14.524317 containerd[2021]: time="2026-04-17T23:36:14.524150874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:14.525070 containerd[2021]: time="2026-04-17T23:36:14.524650722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:14.525070 containerd[2021]: time="2026-04-17T23:36:14.524723766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:14.525430 containerd[2021]: time="2026-04-17T23:36:14.525350634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:14.615201 systemd[1]: Started cri-containerd-bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6.scope - libcontainer container bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6. 
Apr 17 23:36:14.687115 containerd[2021]: time="2026-04-17T23:36:14.687039342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574894c46f-bcpkg,Uid:20e438e6-a5f5-45d7-b808-1a4fb95924d1,Namespace:calico-system,Attempt:1,} returns sandbox id \"9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3\"" Apr 17 23:36:14.695435 containerd[2021]: time="2026-04-17T23:36:14.695369070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:36:14.789805 containerd[2021]: time="2026-04-17T23:36:14.789710263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lghm6,Uid:c31bfe7e-863f-41d7-b162-ae069a76ee07,Namespace:kube-system,Attempt:1,} returns sandbox id \"bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6\"" Apr 17 23:36:14.814011 containerd[2021]: time="2026-04-17T23:36:14.813761887Z" level=info msg="CreateContainer within sandbox \"bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:36:14.869333 containerd[2021]: time="2026-04-17T23:36:14.869216539Z" level=info msg="CreateContainer within sandbox \"bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a97e0ec8b16ce1b42197ddf68bc7a2267028ffaa1a0228ad64193db761c0cfdf\"" Apr 17 23:36:14.871826 containerd[2021]: time="2026-04-17T23:36:14.871136827Z" level=info msg="StartContainer for \"a97e0ec8b16ce1b42197ddf68bc7a2267028ffaa1a0228ad64193db761c0cfdf\"" Apr 17 23:36:14.897817 containerd[2021]: time="2026-04-17T23:36:14.895689139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574894c46f-77jgd,Uid:cb8fd53d-17b0-4864-975d-aee521739f5b,Namespace:calico-system,Attempt:1,} returns sandbox id \"848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a\"" Apr 17 23:36:14.989944 containerd[2021]: 
time="2026-04-17T23:36:14.989113808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fbf6d5cb-ghhvr,Uid:406e705d-732b-4c0e-bff1-e277744b9161,Namespace:calico-system,Attempt:1,} returns sandbox id \"5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0\"" Apr 17 23:36:15.041301 systemd[1]: Started cri-containerd-a97e0ec8b16ce1b42197ddf68bc7a2267028ffaa1a0228ad64193db761c0cfdf.scope - libcontainer container a97e0ec8b16ce1b42197ddf68bc7a2267028ffaa1a0228ad64193db761c0cfdf. Apr 17 23:36:15.101987 containerd[2021]: time="2026-04-17T23:36:15.101873644Z" level=info msg="StartContainer for \"a97e0ec8b16ce1b42197ddf68bc7a2267028ffaa1a0228ad64193db761c0cfdf\" returns successfully" Apr 17 23:36:15.309989 systemd-networkd[1925]: calic8b6d65e34b: Gained IPv6LL Apr 17 23:36:15.310485 systemd-networkd[1925]: cali26c9cc77021: Gained IPv6LL Apr 17 23:36:15.503204 systemd-networkd[1925]: calif1c70744b6a: Gained IPv6LL Apr 17 23:36:15.827741 containerd[2021]: time="2026-04-17T23:36:15.826405076Z" level=info msg="StopPodSandbox for \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\"" Apr 17 23:36:15.949077 systemd-networkd[1925]: cali24d586fe50b: Gained IPv6LL Apr 17 23:36:15.954003 kubelet[3435]: I0417 23:36:15.953692 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lghm6" podStartSLOduration=66.953668221 podStartE2EDuration="1m6.953668221s" podCreationTimestamp="2026-04-17 23:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:15.558993631 +0000 UTC m=+72.125533140" watchObservedRunningTime="2026-04-17 23:36:15.953668221 +0000 UTC m=+72.520207670" Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:15.953 [INFO][5707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 
23:36:16.041520 containerd[2021]: 2026-04-17 23:36:15.954 [INFO][5707] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" iface="eth0" netns="/var/run/netns/cni-8a336fe1-4f8a-33bc-ab46-e906f26711a6" Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:15.956 [INFO][5707] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" iface="eth0" netns="/var/run/netns/cni-8a336fe1-4f8a-33bc-ab46-e906f26711a6" Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:15.956 [INFO][5707] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" iface="eth0" netns="/var/run/netns/cni-8a336fe1-4f8a-33bc-ab46-e906f26711a6" Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:15.957 [INFO][5707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:15.957 [INFO][5707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:16.013 [INFO][5716] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" HandleID="k8s-pod-network.10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:16.015 [INFO][5716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:16.016 [INFO][5716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:16.030 [WARNING][5716] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" HandleID="k8s-pod-network.10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:16.031 [INFO][5716] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" HandleID="k8s-pod-network.10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:16.033 [INFO][5716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:16.041520 containerd[2021]: 2026-04-17 23:36:16.037 [INFO][5707] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:36:16.045572 containerd[2021]: time="2026-04-17T23:36:16.042940877Z" level=info msg="TearDown network for sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\" successfully" Apr 17 23:36:16.045572 containerd[2021]: time="2026-04-17T23:36:16.042994193Z" level=info msg="StopPodSandbox for \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\" returns successfully" Apr 17 23:36:16.051962 containerd[2021]: time="2026-04-17T23:36:16.050367641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mh6rz,Uid:6421d27f-ddc6-4d22-86e3-4278c749f598,Namespace:kube-system,Attempt:1,}" Apr 17 23:36:16.054971 systemd[1]: run-netns-cni\x2d8a336fe1\x2d4f8a\x2d33bc\x2dab46\x2de906f26711a6.mount: Deactivated successfully. 
Apr 17 23:36:16.392980 systemd-networkd[1925]: cali65a365fcedd: Link UP Apr 17 23:36:16.395706 systemd-networkd[1925]: cali65a365fcedd: Gained carrier Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.211 [INFO][5733] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0 coredns-674b8bbfcf- kube-system 6421d27f-ddc6-4d22-86e3-4278c749f598 1108 0 2026-04-17 23:35:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-239 coredns-674b8bbfcf-mh6rz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali65a365fcedd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" Namespace="kube-system" Pod="coredns-674b8bbfcf-mh6rz" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.211 [INFO][5733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" Namespace="kube-system" Pod="coredns-674b8bbfcf-mh6rz" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.274 [INFO][5747] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" HandleID="k8s-pod-network.6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.296 [INFO][5747] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" HandleID="k8s-pod-network.6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbdf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-239", "pod":"coredns-674b8bbfcf-mh6rz", "timestamp":"2026-04-17 23:36:16.274219158 +0000 UTC"}, Hostname:"ip-172-31-27-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.297 [INFO][5747] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.297 [INFO][5747] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.297 [INFO][5747] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-239' Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.301 [INFO][5747] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" host="ip-172-31-27-239" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.323 [INFO][5747] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-239" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.334 [INFO][5747] ipam/ipam.go 526: Trying affinity for 192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.339 [INFO][5747] ipam/ipam.go 160: Attempting to load block cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.346 [INFO][5747] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.347 [INFO][5747] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.35.64/26 handle="k8s-pod-network.6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" host="ip-172-31-27-239" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.351 [INFO][5747] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.362 [INFO][5747] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.35.64/26 handle="k8s-pod-network.6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" host="ip-172-31-27-239" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.376 [INFO][5747] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.35.70/26] block=192.168.35.64/26 
handle="k8s-pod-network.6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" host="ip-172-31-27-239" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.376 [INFO][5747] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.35.70/26] handle="k8s-pod-network.6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" host="ip-172-31-27-239" Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.376 [INFO][5747] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:16.469636 containerd[2021]: 2026-04-17 23:36:16.377 [INFO][5747] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.35.70/26] IPv6=[] ContainerID="6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" HandleID="k8s-pod-network.6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:36:16.472683 containerd[2021]: 2026-04-17 23:36:16.385 [INFO][5733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" Namespace="kube-system" Pod="coredns-674b8bbfcf-mh6rz" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6421d27f-ddc6-4d22-86e3-4278c749f598", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"", Pod:"coredns-674b8bbfcf-mh6rz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65a365fcedd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:16.472683 containerd[2021]: 2026-04-17 23:36:16.385 [INFO][5733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.70/32] ContainerID="6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" Namespace="kube-system" Pod="coredns-674b8bbfcf-mh6rz" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:36:16.472683 containerd[2021]: 2026-04-17 23:36:16.385 [INFO][5733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65a365fcedd ContainerID="6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" Namespace="kube-system" Pod="coredns-674b8bbfcf-mh6rz" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:36:16.472683 containerd[2021]: 2026-04-17 23:36:16.399 [INFO][5733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-mh6rz" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:36:16.472683 containerd[2021]: 2026-04-17 23:36:16.403 [INFO][5733] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" Namespace="kube-system" Pod="coredns-674b8bbfcf-mh6rz" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6421d27f-ddc6-4d22-86e3-4278c749f598", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade", Pod:"coredns-674b8bbfcf-mh6rz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65a365fcedd", MAC:"16:e5:d0:56:c7:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:16.472683 containerd[2021]: 2026-04-17 23:36:16.454 [INFO][5733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade" Namespace="kube-system" Pod="coredns-674b8bbfcf-mh6rz" WorkloadEndpoint="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:36:16.569068 containerd[2021]: time="2026-04-17T23:36:16.557420420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:16.569068 containerd[2021]: time="2026-04-17T23:36:16.557562080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:16.569068 containerd[2021]: time="2026-04-17T23:36:16.557601464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:16.569068 containerd[2021]: time="2026-04-17T23:36:16.558199100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:16.658160 systemd[1]: Started cri-containerd-6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade.scope - libcontainer container 6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade. 
Apr 17 23:36:16.830462 containerd[2021]: time="2026-04-17T23:36:16.829771269Z" level=info msg="StopPodSandbox for \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\"" Apr 17 23:36:16.839845 containerd[2021]: time="2026-04-17T23:36:16.836717145Z" level=info msg="StopPodSandbox for \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\"" Apr 17 23:36:16.859211 containerd[2021]: time="2026-04-17T23:36:16.859146681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mh6rz,Uid:6421d27f-ddc6-4d22-86e3-4278c749f598,Namespace:kube-system,Attempt:1,} returns sandbox id \"6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade\"" Apr 17 23:36:16.975758 containerd[2021]: time="2026-04-17T23:36:16.974780806Z" level=info msg="CreateContainer within sandbox \"6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:16.975 [INFO][5838] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:16.975 [INFO][5838] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" iface="eth0" netns="/var/run/netns/cni-b47e6c98-5f79-fc5a-dce0-2f476268e470" Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:16.977 [INFO][5838] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" iface="eth0" netns="/var/run/netns/cni-b47e6c98-5f79-fc5a-dce0-2f476268e470" Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:16.998 [INFO][5838] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" iface="eth0" netns="/var/run/netns/cni-b47e6c98-5f79-fc5a-dce0-2f476268e470" Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:16.998 [INFO][5838] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:16.999 [INFO][5838] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:17.086 [INFO][5848] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" HandleID="k8s-pod-network.c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:17.087 [INFO][5848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:17.087 [INFO][5848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:17.117 [WARNING][5848] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" HandleID="k8s-pod-network.c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:17.117 [INFO][5848] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" HandleID="k8s-pod-network.c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:17.122 [INFO][5848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:17.157861 containerd[2021]: 2026-04-17 23:36:17.129 [INFO][5838] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:36:17.163832 containerd[2021]: time="2026-04-17T23:36:17.158465743Z" level=info msg="TearDown network for sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\" successfully" Apr 17 23:36:17.163832 containerd[2021]: time="2026-04-17T23:36:17.158509423Z" level=info msg="StopPodSandbox for \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\" returns successfully" Apr 17 23:36:17.165973 containerd[2021]: time="2026-04-17T23:36:17.165240175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gf6r8,Uid:28f397bf-652b-49aa-8829-d5327f553244,Namespace:calico-system,Attempt:1,}" Apr 17 23:36:17.169272 containerd[2021]: time="2026-04-17T23:36:17.169080151Z" level=info msg="CreateContainer within sandbox \"6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c649679eb7185cf984ecd0ab3eda3828f68c801f8ba3ecc135a1c9aed57f08d\"" Apr 17 23:36:17.173726 containerd[2021]: 
time="2026-04-17T23:36:17.170258791Z" level=info msg="StartContainer for \"1c649679eb7185cf984ecd0ab3eda3828f68c801f8ba3ecc135a1c9aed57f08d\"" Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.062 [INFO][5836] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.063 [INFO][5836] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" iface="eth0" netns="/var/run/netns/cni-7364fa96-949a-42d6-daf0-1fa49fd49381" Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.064 [INFO][5836] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" iface="eth0" netns="/var/run/netns/cni-7364fa96-949a-42d6-daf0-1fa49fd49381" Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.064 [INFO][5836] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" iface="eth0" netns="/var/run/netns/cni-7364fa96-949a-42d6-daf0-1fa49fd49381" Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.065 [INFO][5836] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.065 [INFO][5836] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.196 [INFO][5855] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" HandleID="k8s-pod-network.789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.196 [INFO][5855] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.196 [INFO][5855] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.219 [WARNING][5855] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" HandleID="k8s-pod-network.789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.219 [INFO][5855] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" HandleID="k8s-pod-network.789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.224 [INFO][5855] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:17.243989 containerd[2021]: 2026-04-17 23:36:17.234 [INFO][5836] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:36:17.247115 containerd[2021]: time="2026-04-17T23:36:17.246415783Z" level=info msg="TearDown network for sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\" successfully" Apr 17 23:36:17.247115 containerd[2021]: time="2026-04-17T23:36:17.246469171Z" level=info msg="StopPodSandbox for \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\" returns successfully" Apr 17 23:36:17.251509 containerd[2021]: time="2026-04-17T23:36:17.250977751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gq6hp,Uid:ba48e602-86c6-47be-a12c-378408003d1d,Namespace:calico-system,Attempt:1,}" Apr 17 23:36:17.310158 systemd[1]: Started cri-containerd-1c649679eb7185cf984ecd0ab3eda3828f68c801f8ba3ecc135a1c9aed57f08d.scope - libcontainer container 1c649679eb7185cf984ecd0ab3eda3828f68c801f8ba3ecc135a1c9aed57f08d. 
Apr 17 23:36:17.435468 containerd[2021]: time="2026-04-17T23:36:17.434563976Z" level=info msg="StartContainer for \"1c649679eb7185cf984ecd0ab3eda3828f68c801f8ba3ecc135a1c9aed57f08d\" returns successfully" Apr 17 23:36:17.599763 systemd[1]: run-netns-cni\x2db47e6c98\x2d5f79\x2dfc5a\x2ddce0\x2d2f476268e470.mount: Deactivated successfully. Apr 17 23:36:17.600367 systemd[1]: run-netns-cni\x2d7364fa96\x2d949a\x2d42d6\x2ddaf0\x2d1fa49fd49381.mount: Deactivated successfully. Apr 17 23:36:17.855051 systemd-networkd[1925]: cali60a2fdcb9e3: Link UP Apr 17 23:36:17.860759 systemd-networkd[1925]: cali60a2fdcb9e3: Gained carrier Apr 17 23:36:17.910117 kubelet[3435]: I0417 23:36:17.909716 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mh6rz" podStartSLOduration=68.909693982 podStartE2EDuration="1m8.909693982s" podCreationTimestamp="2026-04-17 23:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:17.614368353 +0000 UTC m=+74.180908078" watchObservedRunningTime="2026-04-17 23:36:17.909693982 +0000 UTC m=+74.476233419" Apr 17 23:36:17.933234 systemd-networkd[1925]: cali65a365fcedd: Gained IPv6LL Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.407 [INFO][5872] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0 goldmane-5b85766d88- calico-system 28f397bf-652b-49aa-8829-d5327f553244 1127 0 2026-04-17 23:35:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-27-239 goldmane-5b85766d88-gf6r8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali60a2fdcb9e3 [] [] }} 
ContainerID="bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" Namespace="calico-system" Pod="goldmane-5b85766d88-gf6r8" WorkloadEndpoint="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-" Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.409 [INFO][5872] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" Namespace="calico-system" Pod="goldmane-5b85766d88-gf6r8" WorkloadEndpoint="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.686 [INFO][5925] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" HandleID="k8s-pod-network.bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.734 [INFO][5925] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" HandleID="k8s-pod-network.bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004ba150), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-239", "pod":"goldmane-5b85766d88-gf6r8", "timestamp":"2026-04-17 23:36:17.685512693 +0000 UTC"}, Hostname:"ip-172-31-27-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003962c0)} Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.734 [INFO][5925] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.734 [INFO][5925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.734 [INFO][5925] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-239' Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.740 [INFO][5925] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" host="ip-172-31-27-239" Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.757 [INFO][5925] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-239" Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.775 [INFO][5925] ipam/ipam.go 526: Trying affinity for 192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.784 [INFO][5925] ipam/ipam.go 160: Attempting to load block cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.796 [INFO][5925] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.796 [INFO][5925] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.35.64/26 handle="k8s-pod-network.bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" host="ip-172-31-27-239" Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.800 [INFO][5925] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.812 [INFO][5925] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.35.64/26 handle="k8s-pod-network.bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" host="ip-172-31-27-239" Apr 17 23:36:17.941696 
containerd[2021]: 2026-04-17 23:36:17.832 [INFO][5925] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.35.71/26] block=192.168.35.64/26 handle="k8s-pod-network.bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" host="ip-172-31-27-239" Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.832 [INFO][5925] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.35.71/26] handle="k8s-pod-network.bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" host="ip-172-31-27-239" Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.832 [INFO][5925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:36:17.941696 containerd[2021]: 2026-04-17 23:36:17.832 [INFO][5925] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.35.71/26] IPv6=[] ContainerID="bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" HandleID="k8s-pod-network.bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:36:17.944396 containerd[2021]: 2026-04-17 23:36:17.842 [INFO][5872] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" Namespace="calico-system" Pod="goldmane-5b85766d88-gf6r8" WorkloadEndpoint="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"28f397bf-652b-49aa-8829-d5327f553244", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"", Pod:"goldmane-5b85766d88-gf6r8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali60a2fdcb9e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:17.944396 containerd[2021]: 2026-04-17 23:36:17.844 [INFO][5872] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.71/32] ContainerID="bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" Namespace="calico-system" Pod="goldmane-5b85766d88-gf6r8" WorkloadEndpoint="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:36:17.944396 containerd[2021]: 2026-04-17 23:36:17.845 [INFO][5872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60a2fdcb9e3 ContainerID="bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" Namespace="calico-system" Pod="goldmane-5b85766d88-gf6r8" WorkloadEndpoint="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:36:17.944396 containerd[2021]: 2026-04-17 23:36:17.855 [INFO][5872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" Namespace="calico-system" Pod="goldmane-5b85766d88-gf6r8" WorkloadEndpoint="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:36:17.944396 containerd[2021]: 2026-04-17 23:36:17.865 [INFO][5872] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" Namespace="calico-system" Pod="goldmane-5b85766d88-gf6r8" WorkloadEndpoint="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"28f397bf-652b-49aa-8829-d5327f553244", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de", Pod:"goldmane-5b85766d88-gf6r8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali60a2fdcb9e3", MAC:"72:b4:50:91:21:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:17.944396 containerd[2021]: 2026-04-17 23:36:17.915 [INFO][5872] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de" Namespace="calico-system" Pod="goldmane-5b85766d88-gf6r8" WorkloadEndpoint="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:36:18.022729 systemd-networkd[1925]: calicda6542d937: Link UP Apr 17 23:36:18.028044 systemd-networkd[1925]: calicda6542d937: Gained carrier Apr 17 23:36:18.052357 containerd[2021]: time="2026-04-17T23:36:18.052115947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:18.052357 containerd[2021]: time="2026-04-17T23:36:18.052205107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:18.052357 containerd[2021]: time="2026-04-17T23:36:18.052230943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:18.053110 containerd[2021]: time="2026-04-17T23:36:18.052937059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.506 [INFO][5894] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0 csi-node-driver- calico-system ba48e602-86c6-47be-a12c-378408003d1d 1128 0 2026-04-17 23:35:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-27-239 csi-node-driver-gq6hp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicda6542d937 [] [] }} ContainerID="ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" Namespace="calico-system" Pod="csi-node-driver-gq6hp" WorkloadEndpoint="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.506 [INFO][5894] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" Namespace="calico-system" Pod="csi-node-driver-gq6hp" WorkloadEndpoint="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.695 [INFO][5935] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" HandleID="k8s-pod-network.ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.748 [INFO][5935] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" 
HandleID="k8s-pod-network.ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cc1c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-239", "pod":"csi-node-driver-gq6hp", "timestamp":"2026-04-17 23:36:17.695261829 +0000 UTC"}, Hostname:"ip-172-31-27-239", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.749 [INFO][5935] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.833 [INFO][5935] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.836 [INFO][5935] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-239' Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.852 [INFO][5935] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" host="ip-172-31-27-239" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.883 [INFO][5935] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-239" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.927 [INFO][5935] ipam/ipam.go 526: Trying affinity for 192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.937 [INFO][5935] ipam/ipam.go 160: Attempting to load block cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.953 [INFO][5935] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.35.64/26 host="ip-172-31-27-239" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.953 [INFO][5935] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.35.64/26 handle="k8s-pod-network.ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" host="ip-172-31-27-239" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.962 [INFO][5935] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776 Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.972 [INFO][5935] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.35.64/26 handle="k8s-pod-network.ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" host="ip-172-31-27-239" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.996 [INFO][5935] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.35.72/26] block=192.168.35.64/26 handle="k8s-pod-network.ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" host="ip-172-31-27-239" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.996 [INFO][5935] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.35.72/26] handle="k8s-pod-network.ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" host="ip-172-31-27-239" Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.997 [INFO][5935] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:36:18.090902 containerd[2021]: 2026-04-17 23:36:17.997 [INFO][5935] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.35.72/26] IPv6=[] ContainerID="ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" HandleID="k8s-pod-network.ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:36:18.094301 containerd[2021]: 2026-04-17 23:36:18.010 [INFO][5894] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" Namespace="calico-system" Pod="csi-node-driver-gq6hp" WorkloadEndpoint="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba48e602-86c6-47be-a12c-378408003d1d", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"", Pod:"csi-node-driver-gq6hp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicda6542d937", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:18.094301 containerd[2021]: 2026-04-17 23:36:18.011 [INFO][5894] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.72/32] ContainerID="ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" Namespace="calico-system" Pod="csi-node-driver-gq6hp" WorkloadEndpoint="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:36:18.094301 containerd[2021]: 2026-04-17 23:36:18.011 [INFO][5894] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicda6542d937 ContainerID="ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" Namespace="calico-system" Pod="csi-node-driver-gq6hp" WorkloadEndpoint="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:36:18.094301 containerd[2021]: 2026-04-17 23:36:18.027 [INFO][5894] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" Namespace="calico-system" Pod="csi-node-driver-gq6hp" WorkloadEndpoint="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:36:18.094301 containerd[2021]: 2026-04-17 23:36:18.029 [INFO][5894] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" Namespace="calico-system" Pod="csi-node-driver-gq6hp" WorkloadEndpoint="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba48e602-86c6-47be-a12c-378408003d1d", 
ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776", Pod:"csi-node-driver-gq6hp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicda6542d937", MAC:"a2:31:7c:b1:5e:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:36:18.094301 containerd[2021]: 2026-04-17 23:36:18.078 [INFO][5894] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776" Namespace="calico-system" Pod="csi-node-driver-gq6hp" WorkloadEndpoint="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:36:18.140993 systemd[1]: Started cri-containerd-bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de.scope - libcontainer container bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de. Apr 17 23:36:18.229473 systemd[1]: Started sshd@8-172.31.27.239:22-4.175.71.9:48360.service - OpenSSH per-connection server daemon (4.175.71.9:48360). 
Apr 17 23:36:18.234831 containerd[2021]: time="2026-04-17T23:36:18.231523340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:36:18.234831 containerd[2021]: time="2026-04-17T23:36:18.231627056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:36:18.234831 containerd[2021]: time="2026-04-17T23:36:18.231671096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:18.234831 containerd[2021]: time="2026-04-17T23:36:18.231900392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:36:18.373945 containerd[2021]: time="2026-04-17T23:36:18.373869741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gf6r8,Uid:28f397bf-652b-49aa-8829-d5327f553244,Namespace:calico-system,Attempt:1,} returns sandbox id \"bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de\"" Apr 17 23:36:18.389178 systemd[1]: Started cri-containerd-ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776.scope - libcontainer container ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776. 
Apr 17 23:36:18.459127 containerd[2021]: time="2026-04-17T23:36:18.458968593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gq6hp,Uid:ba48e602-86c6-47be-a12c-378408003d1d,Namespace:calico-system,Attempt:1,} returns sandbox id \"ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776\"" Apr 17 23:36:18.957214 systemd-networkd[1925]: cali60a2fdcb9e3: Gained IPv6LL Apr 17 23:36:19.282916 sshd[6019]: Accepted publickey for core from 4.175.71.9 port 48360 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:19.292348 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:19.304664 systemd-logind[1998]: New session 9 of user core. Apr 17 23:36:19.312138 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:36:19.376501 containerd[2021]: time="2026-04-17T23:36:19.374663530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:19.376501 containerd[2021]: time="2026-04-17T23:36:19.376312990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 17 23:36:19.376501 containerd[2021]: time="2026-04-17T23:36:19.376425454Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:19.381124 containerd[2021]: time="2026-04-17T23:36:19.381044206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:19.383219 containerd[2021]: time="2026-04-17T23:36:19.383157058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 4.687720428s" Apr 17 23:36:19.383404 containerd[2021]: time="2026-04-17T23:36:19.383367118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 23:36:19.385382 containerd[2021]: time="2026-04-17T23:36:19.385331818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:36:19.391594 containerd[2021]: time="2026-04-17T23:36:19.391532434Z" level=info msg="CreateContainer within sandbox \"9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:36:19.418844 containerd[2021]: time="2026-04-17T23:36:19.413169754Z" level=info msg="CreateContainer within sandbox \"9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d08e409dd136524ea234113930cfd9f9add69afc96e8c5ca205da07224ed913\"" Apr 17 23:36:19.418844 containerd[2021]: time="2026-04-17T23:36:19.417911746Z" level=info msg="StartContainer for \"2d08e409dd136524ea234113930cfd9f9add69afc96e8c5ca205da07224ed913\"" Apr 17 23:36:19.422482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352806174.mount: Deactivated successfully. Apr 17 23:36:19.516265 systemd[1]: Started cri-containerd-2d08e409dd136524ea234113930cfd9f9add69afc96e8c5ca205da07224ed913.scope - libcontainer container 2d08e409dd136524ea234113930cfd9f9add69afc96e8c5ca205da07224ed913. 
Apr 17 23:36:19.590612 containerd[2021]: time="2026-04-17T23:36:19.589543259Z" level=info msg="StartContainer for \"2d08e409dd136524ea234113930cfd9f9add69afc96e8c5ca205da07224ed913\" returns successfully" Apr 17 23:36:19.729686 containerd[2021]: time="2026-04-17T23:36:19.729585827Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:19.730594 containerd[2021]: time="2026-04-17T23:36:19.730526879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:36:19.738988 containerd[2021]: time="2026-04-17T23:36:19.738731580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 351.588506ms" Apr 17 23:36:19.739355 containerd[2021]: time="2026-04-17T23:36:19.739312080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 23:36:19.742450 containerd[2021]: time="2026-04-17T23:36:19.742140108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:36:19.750891 containerd[2021]: time="2026-04-17T23:36:19.750750180Z" level=info msg="CreateContainer within sandbox \"848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:36:19.774367 containerd[2021]: time="2026-04-17T23:36:19.773186328Z" level=info msg="CreateContainer within sandbox \"848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"28b65f26a6609b32c4cf21f1cba26d581d540ff55b918fa1b6f60eaf9ab6def8\"" Apr 17 23:36:19.776350 containerd[2021]: time="2026-04-17T23:36:19.776267880Z" level=info msg="StartContainer for \"28b65f26a6609b32c4cf21f1cba26d581d540ff55b918fa1b6f60eaf9ab6def8\"" Apr 17 23:36:19.907136 systemd[1]: Started cri-containerd-28b65f26a6609b32c4cf21f1cba26d581d540ff55b918fa1b6f60eaf9ab6def8.scope - libcontainer container 28b65f26a6609b32c4cf21f1cba26d581d540ff55b918fa1b6f60eaf9ab6def8. Apr 17 23:36:20.046863 systemd-networkd[1925]: calicda6542d937: Gained IPv6LL Apr 17 23:36:20.072418 containerd[2021]: time="2026-04-17T23:36:20.072349173Z" level=info msg="StartContainer for \"28b65f26a6609b32c4cf21f1cba26d581d540ff55b918fa1b6f60eaf9ab6def8\" returns successfully" Apr 17 23:36:20.267556 sshd[6019]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:20.276317 systemd[1]: sshd@8-172.31.27.239:22-4.175.71.9:48360.service: Deactivated successfully. Apr 17 23:36:20.287720 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:36:20.298094 systemd-logind[1998]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:36:20.303235 systemd-logind[1998]: Removed session 9. 
Apr 17 23:36:20.608108 kubelet[3435]: I0417 23:36:20.605689 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-574894c46f-77jgd" podStartSLOduration=41.799348688 podStartE2EDuration="46.6056625s" podCreationTimestamp="2026-04-17 23:35:34 +0000 UTC" firstStartedPulling="2026-04-17 23:36:14.935516096 +0000 UTC m=+71.502055533" lastFinishedPulling="2026-04-17 23:36:19.741829824 +0000 UTC m=+76.308369345" observedRunningTime="2026-04-17 23:36:20.605223168 +0000 UTC m=+77.171762629" watchObservedRunningTime="2026-04-17 23:36:20.6056625 +0000 UTC m=+77.172201973" Apr 17 23:36:20.639413 kubelet[3435]: I0417 23:36:20.638100 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-574894c46f-bcpkg" podStartSLOduration=41.945784244 podStartE2EDuration="46.637949712s" podCreationTimestamp="2026-04-17 23:35:34 +0000 UTC" firstStartedPulling="2026-04-17 23:36:14.692893914 +0000 UTC m=+71.259433339" lastFinishedPulling="2026-04-17 23:36:19.38505937 +0000 UTC m=+75.951598807" observedRunningTime="2026-04-17 23:36:20.635645964 +0000 UTC m=+77.202185665" watchObservedRunningTime="2026-04-17 23:36:20.637949712 +0000 UTC m=+77.204489149" Apr 17 23:36:21.597296 kubelet[3435]: I0417 23:36:21.596711 3435 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:36:22.601136 kubelet[3435]: I0417 23:36:22.600199 3435 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:36:22.968308 ntpd[1991]: Listen normally on 11 calif1c70744b6a [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:36:22.969424 ntpd[1991]: 17 Apr 23:36:22 ntpd[1991]: Listen normally on 11 calif1c70744b6a [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:36:22.969424 ntpd[1991]: 17 Apr 23:36:22 ntpd[1991]: Listen normally on 12 cali26c9cc77021 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:36:22.969424 ntpd[1991]: 17 Apr 23:36:22 ntpd[1991]: Listen normally on 13 
calic8b6d65e34b [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:36:22.969424 ntpd[1991]: 17 Apr 23:36:22 ntpd[1991]: Listen normally on 14 cali24d586fe50b [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:36:22.969424 ntpd[1991]: 17 Apr 23:36:22 ntpd[1991]: Listen normally on 15 cali65a365fcedd [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 23:36:22.969424 ntpd[1991]: 17 Apr 23:36:22 ntpd[1991]: Listen normally on 16 cali60a2fdcb9e3 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 23:36:22.969424 ntpd[1991]: 17 Apr 23:36:22 ntpd[1991]: Listen normally on 17 calicda6542d937 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 23:36:22.968418 ntpd[1991]: Listen normally on 12 cali26c9cc77021 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:36:22.968488 ntpd[1991]: Listen normally on 13 calic8b6d65e34b [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:36:22.968557 ntpd[1991]: Listen normally on 14 cali24d586fe50b [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:36:22.968630 ntpd[1991]: Listen normally on 15 cali65a365fcedd [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 23:36:22.968724 ntpd[1991]: Listen normally on 16 cali60a2fdcb9e3 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 23:36:22.968826 ntpd[1991]: Listen normally on 17 calicda6542d937 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 23:36:25.447434 systemd[1]: Started sshd@9-172.31.27.239:22-4.175.71.9:33826.service - OpenSSH per-connection server daemon (4.175.71.9:33826). Apr 17 23:36:26.485557 sshd[6186]: Accepted publickey for core from 4.175.71.9 port 33826 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:26.489884 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:26.504966 systemd-logind[1998]: New session 10 of user core. Apr 17 23:36:26.512181 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 17 23:36:26.660886 containerd[2021]: time="2026-04-17T23:36:26.659215278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:26.660886 containerd[2021]: time="2026-04-17T23:36:26.660851190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 17 23:36:26.662041 containerd[2021]: time="2026-04-17T23:36:26.661908750Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:26.666906 containerd[2021]: time="2026-04-17T23:36:26.666850002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:26.668737 containerd[2021]: time="2026-04-17T23:36:26.668686698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 6.926488414s" Apr 17 23:36:26.668970 containerd[2021]: time="2026-04-17T23:36:26.668937930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 17 23:36:26.671191 containerd[2021]: time="2026-04-17T23:36:26.671145702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:36:26.714596 containerd[2021]: time="2026-04-17T23:36:26.714529830Z" level=info msg="CreateContainer within sandbox 
\"5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:36:26.740906 containerd[2021]: time="2026-04-17T23:36:26.739746894Z" level=info msg="CreateContainer within sandbox \"5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1657fcaad7805ba62f381445db94590c7f13d67cd8dee1e3cd603c38a9d9f009\"" Apr 17 23:36:26.742623 containerd[2021]: time="2026-04-17T23:36:26.742470330Z" level=info msg="StartContainer for \"1657fcaad7805ba62f381445db94590c7f13d67cd8dee1e3cd603c38a9d9f009\"" Apr 17 23:36:26.803840 systemd[1]: Started cri-containerd-1657fcaad7805ba62f381445db94590c7f13d67cd8dee1e3cd603c38a9d9f009.scope - libcontainer container 1657fcaad7805ba62f381445db94590c7f13d67cd8dee1e3cd603c38a9d9f009. Apr 17 23:36:26.905096 containerd[2021]: time="2026-04-17T23:36:26.905029927Z" level=info msg="StartContainer for \"1657fcaad7805ba62f381445db94590c7f13d67cd8dee1e3cd603c38a9d9f009\" returns successfully" Apr 17 23:36:27.425769 sshd[6186]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:27.432619 systemd[1]: sshd@9-172.31.27.239:22-4.175.71.9:33826.service: Deactivated successfully. Apr 17 23:36:27.436972 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:36:27.438655 systemd-logind[1998]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:36:27.440634 systemd-logind[1998]: Removed session 10. 
Apr 17 23:36:27.750897 kubelet[3435]: I0417 23:36:27.748773 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76fbf6d5cb-ghhvr" podStartSLOduration=36.074699033 podStartE2EDuration="47.748749523s" podCreationTimestamp="2026-04-17 23:35:40 +0000 UTC" firstStartedPulling="2026-04-17 23:36:14.996516572 +0000 UTC m=+71.563055997" lastFinishedPulling="2026-04-17 23:36:26.670566966 +0000 UTC m=+83.237106487" observedRunningTime="2026-04-17 23:36:27.654309127 +0000 UTC m=+84.220848576" watchObservedRunningTime="2026-04-17 23:36:27.748749523 +0000 UTC m=+84.315288948" Apr 17 23:36:29.293708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718476456.mount: Deactivated successfully. Apr 17 23:36:29.992834 containerd[2021]: time="2026-04-17T23:36:29.992730754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 17 23:36:29.996851 containerd[2021]: time="2026-04-17T23:36:29.996229042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:30.000949 containerd[2021]: time="2026-04-17T23:36:30.000888426Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:30.002856 containerd[2021]: time="2026-04-17T23:36:30.002740710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:30.005722 containerd[2021]: time="2026-04-17T23:36:30.005379115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.332027681s" Apr 17 23:36:30.005722 containerd[2021]: time="2026-04-17T23:36:30.005450695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 17 23:36:30.012187 containerd[2021]: time="2026-04-17T23:36:30.011980219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:36:30.019014 containerd[2021]: time="2026-04-17T23:36:30.018940267Z" level=info msg="CreateContainer within sandbox \"bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:36:30.047271 containerd[2021]: time="2026-04-17T23:36:30.047192167Z" level=info msg="CreateContainer within sandbox \"bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c166c2b8e51f25d02633d3e3b33a68a5de00d98ec6daf5a36da63b332c83e6f5\"" Apr 17 23:36:30.049011 containerd[2021]: time="2026-04-17T23:36:30.048939643Z" level=info msg="StartContainer for \"c166c2b8e51f25d02633d3e3b33a68a5de00d98ec6daf5a36da63b332c83e6f5\"" Apr 17 23:36:30.146531 systemd[1]: Started cri-containerd-c166c2b8e51f25d02633d3e3b33a68a5de00d98ec6daf5a36da63b332c83e6f5.scope - libcontainer container c166c2b8e51f25d02633d3e3b33a68a5de00d98ec6daf5a36da63b332c83e6f5. 
Apr 17 23:36:30.260068 containerd[2021]: time="2026-04-17T23:36:30.258862688Z" level=info msg="StartContainer for \"c166c2b8e51f25d02633d3e3b33a68a5de00d98ec6daf5a36da63b332c83e6f5\" returns successfully" Apr 17 23:36:30.665930 kubelet[3435]: I0417 23:36:30.665008 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-gf6r8" podStartSLOduration=45.03584534 podStartE2EDuration="56.664977358s" podCreationTimestamp="2026-04-17 23:35:34 +0000 UTC" firstStartedPulling="2026-04-17 23:36:18.380285769 +0000 UTC m=+74.946825194" lastFinishedPulling="2026-04-17 23:36:30.009417775 +0000 UTC m=+86.575957212" observedRunningTime="2026-04-17 23:36:30.661923598 +0000 UTC m=+87.228463047" watchObservedRunningTime="2026-04-17 23:36:30.664977358 +0000 UTC m=+87.231516795" Apr 17 23:36:30.707960 systemd[1]: run-containerd-runc-k8s.io-c166c2b8e51f25d02633d3e3b33a68a5de00d98ec6daf5a36da63b332c83e6f5-runc.N5l5oq.mount: Deactivated successfully. Apr 17 23:36:31.582476 containerd[2021]: time="2026-04-17T23:36:31.580819750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:31.583572 containerd[2021]: time="2026-04-17T23:36:31.583522726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 17 23:36:31.585045 containerd[2021]: time="2026-04-17T23:36:31.584998990Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:31.589379 containerd[2021]: time="2026-04-17T23:36:31.589310986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:31.591435 containerd[2021]: 
time="2026-04-17T23:36:31.591366562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.578013099s" Apr 17 23:36:31.591435 containerd[2021]: time="2026-04-17T23:36:31.591429082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 17 23:36:31.601182 containerd[2021]: time="2026-04-17T23:36:31.601091962Z" level=info msg="CreateContainer within sandbox \"ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:36:31.632200 containerd[2021]: time="2026-04-17T23:36:31.632123807Z" level=info msg="CreateContainer within sandbox \"ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3f8f27c98ebc037c57bf925c556cd88d6879e16eba0e33ba2ee0d89695371a0b\"" Apr 17 23:36:31.633688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1563547513.mount: Deactivated successfully. Apr 17 23:36:31.637656 containerd[2021]: time="2026-04-17T23:36:31.635908727Z" level=info msg="StartContainer for \"3f8f27c98ebc037c57bf925c556cd88d6879e16eba0e33ba2ee0d89695371a0b\"" Apr 17 23:36:31.708156 systemd[1]: Started cri-containerd-3f8f27c98ebc037c57bf925c556cd88d6879e16eba0e33ba2ee0d89695371a0b.scope - libcontainer container 3f8f27c98ebc037c57bf925c556cd88d6879e16eba0e33ba2ee0d89695371a0b. 
Apr 17 23:36:31.781705 containerd[2021]: time="2026-04-17T23:36:31.777291443Z" level=info msg="StartContainer for \"3f8f27c98ebc037c57bf925c556cd88d6879e16eba0e33ba2ee0d89695371a0b\" returns successfully" Apr 17 23:36:31.782776 containerd[2021]: time="2026-04-17T23:36:31.782707067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:36:32.626317 systemd[1]: Started sshd@10-172.31.27.239:22-4.175.71.9:33838.service - OpenSSH per-connection server daemon (4.175.71.9:33838). Apr 17 23:36:33.703372 sshd[6438]: Accepted publickey for core from 4.175.71.9 port 33838 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:33.707572 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:33.714380 containerd[2021]: time="2026-04-17T23:36:33.714298945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:33.718433 containerd[2021]: time="2026-04-17T23:36:33.717854185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 17 23:36:33.718433 containerd[2021]: time="2026-04-17T23:36:33.718085557Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:33.719538 systemd-logind[1998]: New session 11 of user core. Apr 17 23:36:33.728139 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 17 23:36:33.739087 containerd[2021]: time="2026-04-17T23:36:33.739024813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:36:33.748505 containerd[2021]: time="2026-04-17T23:36:33.748436761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.965659074s" Apr 17 23:36:33.748841 containerd[2021]: time="2026-04-17T23:36:33.748710229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 17 23:36:33.756522 containerd[2021]: time="2026-04-17T23:36:33.756328633Z" level=info msg="CreateContainer within sandbox \"ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:36:33.780680 containerd[2021]: time="2026-04-17T23:36:33.780247741Z" level=info msg="CreateContainer within sandbox \"ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"124d34cef7d3b6e5cb57864c87202e4d8f88ead6a57ce60e6e508bf0b8acad2a\"" Apr 17 23:36:33.789146 containerd[2021]: time="2026-04-17T23:36:33.786093301Z" level=info msg="StartContainer for \"124d34cef7d3b6e5cb57864c87202e4d8f88ead6a57ce60e6e508bf0b8acad2a\"" Apr 17 23:36:33.868331 systemd[1]: Started cri-containerd-124d34cef7d3b6e5cb57864c87202e4d8f88ead6a57ce60e6e508bf0b8acad2a.scope - 
libcontainer container 124d34cef7d3b6e5cb57864c87202e4d8f88ead6a57ce60e6e508bf0b8acad2a. Apr 17 23:36:33.945362 containerd[2021]: time="2026-04-17T23:36:33.945261782Z" level=info msg="StartContainer for \"124d34cef7d3b6e5cb57864c87202e4d8f88ead6a57ce60e6e508bf0b8acad2a\" returns successfully" Apr 17 23:36:34.071313 kubelet[3435]: I0417 23:36:34.071014 3435 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:36:34.071313 kubelet[3435]: I0417 23:36:34.071082 3435 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:36:34.546307 sshd[6438]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:34.555165 systemd[1]: sshd@10-172.31.27.239:22-4.175.71.9:33838.service: Deactivated successfully. Apr 17 23:36:34.563261 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:36:34.566967 systemd-logind[1998]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:36:34.569139 systemd-logind[1998]: Removed session 11. Apr 17 23:36:34.745731 systemd[1]: Started sshd@11-172.31.27.239:22-4.175.71.9:33848.service - OpenSSH per-connection server daemon (4.175.71.9:33848). Apr 17 23:36:35.783201 sshd[6496]: Accepted publickey for core from 4.175.71.9 port 33848 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:35.787634 sshd[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:35.798149 systemd-logind[1998]: New session 12 of user core. Apr 17 23:36:35.805119 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:36:36.729292 sshd[6496]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:36.737593 systemd[1]: sshd@11-172.31.27.239:22-4.175.71.9:33848.service: Deactivated successfully. 
Apr 17 23:36:36.742267 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:36:36.745324 systemd-logind[1998]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:36:36.748242 systemd-logind[1998]: Removed session 12. Apr 17 23:36:36.901242 systemd[1]: Started sshd@12-172.31.27.239:22-4.175.71.9:44046.service - OpenSSH per-connection server daemon (4.175.71.9:44046). Apr 17 23:36:36.993316 kubelet[3435]: I0417 23:36:36.993064 3435 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:36:37.035620 kubelet[3435]: I0417 23:36:37.035517 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gq6hp" podStartSLOduration=42.750505597 podStartE2EDuration="58.035495017s" podCreationTimestamp="2026-04-17 23:35:39 +0000 UTC" firstStartedPulling="2026-04-17 23:36:18.464939169 +0000 UTC m=+75.031478606" lastFinishedPulling="2026-04-17 23:36:33.749928601 +0000 UTC m=+90.316468026" observedRunningTime="2026-04-17 23:36:34.687079838 +0000 UTC m=+91.253619299" watchObservedRunningTime="2026-04-17 23:36:37.035495017 +0000 UTC m=+93.602034442" Apr 17 23:36:37.917756 sshd[6508]: Accepted publickey for core from 4.175.71.9 port 44046 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:37.920591 sshd[6508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:37.930319 systemd-logind[1998]: New session 13 of user core. Apr 17 23:36:37.939130 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:36:38.731524 sshd[6508]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:38.741972 systemd[1]: sshd@12-172.31.27.239:22-4.175.71.9:44046.service: Deactivated successfully. Apr 17 23:36:38.747774 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:36:38.751160 systemd-logind[1998]: Session 13 logged out. Waiting for processes to exit. 
Apr 17 23:36:38.754436 systemd-logind[1998]: Removed session 13. Apr 17 23:36:43.913457 systemd[1]: Started sshd@13-172.31.27.239:22-4.175.71.9:44048.service - OpenSSH per-connection server daemon (4.175.71.9:44048). Apr 17 23:36:44.913845 sshd[6534]: Accepted publickey for core from 4.175.71.9 port 44048 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:44.916738 sshd[6534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:44.926094 systemd-logind[1998]: New session 14 of user core. Apr 17 23:36:44.932145 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:36:45.735469 sshd[6534]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:45.743132 systemd[1]: sshd@13-172.31.27.239:22-4.175.71.9:44048.service: Deactivated successfully. Apr 17 23:36:45.748874 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:36:45.750603 systemd-logind[1998]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:36:45.752935 systemd-logind[1998]: Removed session 14. Apr 17 23:36:45.921047 systemd[1]: Started sshd@14-172.31.27.239:22-4.175.71.9:48346.service - OpenSSH per-connection server daemon (4.175.71.9:48346). Apr 17 23:36:46.926596 sshd[6549]: Accepted publickey for core from 4.175.71.9 port 48346 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:46.929546 sshd[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:46.942633 systemd-logind[1998]: New session 15 of user core. Apr 17 23:36:46.950126 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:36:48.079821 sshd[6549]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:48.088675 systemd[1]: sshd@14-172.31.27.239:22-4.175.71.9:48346.service: Deactivated successfully. Apr 17 23:36:48.092599 systemd[1]: session-15.scope: Deactivated successfully. 
Apr 17 23:36:48.095076 systemd-logind[1998]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:36:48.097284 systemd-logind[1998]: Removed session 15. Apr 17 23:36:48.267320 systemd[1]: Started sshd@15-172.31.27.239:22-4.175.71.9:48362.service - OpenSSH per-connection server daemon (4.175.71.9:48362). Apr 17 23:36:49.296835 sshd[6570]: Accepted publickey for core from 4.175.71.9 port 48362 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:49.298779 sshd[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:49.306593 systemd-logind[1998]: New session 16 of user core. Apr 17 23:36:49.313083 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:36:50.952368 sshd[6570]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:50.960427 systemd-logind[1998]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:36:50.962621 systemd[1]: sshd@15-172.31.27.239:22-4.175.71.9:48362.service: Deactivated successfully. Apr 17 23:36:50.969308 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:36:50.972089 systemd-logind[1998]: Removed session 16. Apr 17 23:36:51.138428 systemd[1]: Started sshd@16-172.31.27.239:22-4.175.71.9:48374.service - OpenSSH per-connection server daemon (4.175.71.9:48374). Apr 17 23:36:52.178688 sshd[6596]: Accepted publickey for core from 4.175.71.9 port 48374 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:52.181855 sshd[6596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:52.191435 systemd-logind[1998]: New session 17 of user core. Apr 17 23:36:52.199068 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 23:36:53.280409 sshd[6596]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:53.289258 systemd-logind[1998]: Session 17 logged out. Waiting for processes to exit. 
Apr 17 23:36:53.290106 systemd[1]: sshd@16-172.31.27.239:22-4.175.71.9:48374.service: Deactivated successfully. Apr 17 23:36:53.294879 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 23:36:53.298404 systemd-logind[1998]: Removed session 17. Apr 17 23:36:53.454324 systemd[1]: Started sshd@17-172.31.27.239:22-4.175.71.9:48376.service - OpenSSH per-connection server daemon (4.175.71.9:48376). Apr 17 23:36:54.461348 sshd[6607]: Accepted publickey for core from 4.175.71.9 port 48376 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:54.463524 sshd[6607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:54.471695 systemd-logind[1998]: New session 18 of user core. Apr 17 23:36:54.478090 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 23:36:55.290116 sshd[6607]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:55.303250 systemd[1]: sshd@17-172.31.27.239:22-4.175.71.9:48376.service: Deactivated successfully. Apr 17 23:36:55.308529 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 23:36:55.310904 systemd-logind[1998]: Session 18 logged out. Waiting for processes to exit. Apr 17 23:36:55.313692 systemd-logind[1998]: Removed session 18. Apr 17 23:37:00.478384 systemd[1]: Started sshd@18-172.31.27.239:22-4.175.71.9:41384.service - OpenSSH per-connection server daemon (4.175.71.9:41384). Apr 17 23:37:01.510910 sshd[6668]: Accepted publickey for core from 4.175.71.9 port 41384 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:01.513648 sshd[6668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:01.521953 systemd-logind[1998]: New session 19 of user core. Apr 17 23:37:01.532107 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 17 23:37:02.340524 sshd[6668]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:02.347433 systemd[1]: sshd@18-172.31.27.239:22-4.175.71.9:41384.service: Deactivated successfully. Apr 17 23:37:02.351599 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 23:37:02.357340 systemd-logind[1998]: Session 19 logged out. Waiting for processes to exit. Apr 17 23:37:02.359619 systemd-logind[1998]: Removed session 19. Apr 17 23:37:04.473908 containerd[2021]: time="2026-04-17T23:37:04.473131890Z" level=info msg="StopPodSandbox for \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\"" Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.538 [WARNING][6731] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0", GenerateName:"calico-apiserver-574894c46f-", Namespace:"calico-system", SelfLink:"", UID:"cb8fd53d-17b0-4864-975d-aee521739f5b", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574894c46f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", 
ContainerID:"848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a", Pod:"calico-apiserver-574894c46f-77jgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali26c9cc77021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.538 [INFO][6731] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.538 [INFO][6731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" iface="eth0" netns="" Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.539 [INFO][6731] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.539 [INFO][6731] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.587 [INFO][6738] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" HandleID="k8s-pod-network.abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.587 [INFO][6738] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.587 [INFO][6738] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.602 [WARNING][6738] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" HandleID="k8s-pod-network.abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.602 [INFO][6738] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" HandleID="k8s-pod-network.abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.604 [INFO][6738] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:04.613499 containerd[2021]: 2026-04-17 23:37:04.607 [INFO][6731] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:37:04.613499 containerd[2021]: time="2026-04-17T23:37:04.613071066Z" level=info msg="TearDown network for sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\" successfully" Apr 17 23:37:04.613499 containerd[2021]: time="2026-04-17T23:37:04.613109310Z" level=info msg="StopPodSandbox for \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\" returns successfully" Apr 17 23:37:04.614440 containerd[2021]: time="2026-04-17T23:37:04.614017434Z" level=info msg="RemovePodSandbox for \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\"" Apr 17 23:37:04.614440 containerd[2021]: time="2026-04-17T23:37:04.614143914Z" level=info msg="Forcibly stopping sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\"" Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.690 [WARNING][6752] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0", GenerateName:"calico-apiserver-574894c46f-", Namespace:"calico-system", SelfLink:"", UID:"cb8fd53d-17b0-4864-975d-aee521739f5b", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574894c46f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"848345a7494cc02acffcd17bbea74fda13c87f61ebf9aa38b928cb34d19b779a", Pod:"calico-apiserver-574894c46f-77jgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali26c9cc77021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.690 [INFO][6752] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.690 [INFO][6752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" iface="eth0" netns="" Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.690 [INFO][6752] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.690 [INFO][6752] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.729 [INFO][6759] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" HandleID="k8s-pod-network.abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.730 [INFO][6759] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.730 [INFO][6759] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.748 [WARNING][6759] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" HandleID="k8s-pod-network.abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.748 [INFO][6759] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" HandleID="k8s-pod-network.abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--77jgd-eth0" Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.751 [INFO][6759] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:04.759145 containerd[2021]: 2026-04-17 23:37:04.754 [INFO][6752] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7" Apr 17 23:37:04.759145 containerd[2021]: time="2026-04-17T23:37:04.758323123Z" level=info msg="TearDown network for sandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\" successfully" Apr 17 23:37:04.767049 containerd[2021]: time="2026-04-17T23:37:04.766968475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:37:04.767199 containerd[2021]: time="2026-04-17T23:37:04.767081767Z" level=info msg="RemovePodSandbox \"abc76879c6a6976bf286be8841f5ab13184414ea13f9d5dc67112d53b26e8cc7\" returns successfully" Apr 17 23:37:04.767737 containerd[2021]: time="2026-04-17T23:37:04.767670907Z" level=info msg="StopPodSandbox for \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\"" Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.834 [WARNING][6774] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0", GenerateName:"calico-kube-controllers-76fbf6d5cb-", Namespace:"calico-system", SelfLink:"", UID:"406e705d-732b-4c0e-bff1-e277744b9161", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fbf6d5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0", Pod:"calico-kube-controllers-76fbf6d5cb-ghhvr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic8b6d65e34b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.834 [INFO][6774] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.834 [INFO][6774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" iface="eth0" netns="" Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.834 [INFO][6774] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.834 [INFO][6774] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.874 [INFO][6782] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" HandleID="k8s-pod-network.29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.874 [INFO][6782] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.874 [INFO][6782] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.888 [WARNING][6782] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" HandleID="k8s-pod-network.29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.888 [INFO][6782] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" HandleID="k8s-pod-network.29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.891 [INFO][6782] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:04.897765 containerd[2021]: 2026-04-17 23:37:04.894 [INFO][6774] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:37:04.900144 containerd[2021]: time="2026-04-17T23:37:04.897852992Z" level=info msg="TearDown network for sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\" successfully" Apr 17 23:37:04.900144 containerd[2021]: time="2026-04-17T23:37:04.897891584Z" level=info msg="StopPodSandbox for \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\" returns successfully" Apr 17 23:37:04.902818 containerd[2021]: time="2026-04-17T23:37:04.901725380Z" level=info msg="RemovePodSandbox for \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\"" Apr 17 23:37:04.902818 containerd[2021]: time="2026-04-17T23:37:04.901879376Z" level=info msg="Forcibly stopping sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\"" Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:04.978 [WARNING][6796] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0", GenerateName:"calico-kube-controllers-76fbf6d5cb-", Namespace:"calico-system", SelfLink:"", UID:"406e705d-732b-4c0e-bff1-e277744b9161", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fbf6d5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"5d147c581b7d3b7c7e1b4396ed18c1ca2692850c9325e5bb1bfb659d9ca599d0", Pod:"calico-kube-controllers-76fbf6d5cb-ghhvr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic8b6d65e34b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:04.978 [INFO][6796] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:04.978 [INFO][6796] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" iface="eth0" netns="" Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:04.978 [INFO][6796] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:04.978 [INFO][6796] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:05.019 [INFO][6803] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" HandleID="k8s-pod-network.29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:05.019 [INFO][6803] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:05.019 [INFO][6803] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:05.032 [WARNING][6803] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" HandleID="k8s-pod-network.29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:05.033 [INFO][6803] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" HandleID="k8s-pod-network.29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Workload="ip--172--31--27--239-k8s-calico--kube--controllers--76fbf6d5cb--ghhvr-eth0" Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:05.035 [INFO][6803] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:05.041946 containerd[2021]: 2026-04-17 23:37:05.038 [INFO][6796] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3" Apr 17 23:37:05.041946 containerd[2021]: time="2026-04-17T23:37:05.041650781Z" level=info msg="TearDown network for sandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\" successfully" Apr 17 23:37:05.048033 containerd[2021]: time="2026-04-17T23:37:05.047955641Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:37:05.048188 containerd[2021]: time="2026-04-17T23:37:05.048128309Z" level=info msg="RemovePodSandbox \"29a0b8a4936a26be24eea11c0d6437c158780a9b71825aa2d84dc7069cbeb7e3\" returns successfully" Apr 17 23:37:05.048857 containerd[2021]: time="2026-04-17T23:37:05.048779345Z" level=info msg="StopPodSandbox for \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\"" Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.140 [WARNING][6817] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c31bfe7e-863f-41d7-b162-ae069a76ee07", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6", Pod:"coredns-674b8bbfcf-lghm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24d586fe50b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.141 [INFO][6817] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.141 [INFO][6817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" iface="eth0" netns="" Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.141 [INFO][6817] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.141 [INFO][6817] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.181 [INFO][6824] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" HandleID="k8s-pod-network.eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.184 [INFO][6824] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.184 [INFO][6824] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.209 [WARNING][6824] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" HandleID="k8s-pod-network.eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.209 [INFO][6824] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" HandleID="k8s-pod-network.eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.212 [INFO][6824] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:05.221706 containerd[2021]: 2026-04-17 23:37:05.218 [INFO][6817] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:37:05.222570 containerd[2021]: time="2026-04-17T23:37:05.221754941Z" level=info msg="TearDown network for sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\" successfully" Apr 17 23:37:05.222570 containerd[2021]: time="2026-04-17T23:37:05.221838761Z" level=info msg="StopPodSandbox for \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\" returns successfully" Apr 17 23:37:05.223236 containerd[2021]: time="2026-04-17T23:37:05.223172741Z" level=info msg="RemovePodSandbox for \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\"" Apr 17 23:37:05.223364 containerd[2021]: time="2026-04-17T23:37:05.223259477Z" level=info msg="Forcibly stopping sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\"" Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.310 [WARNING][6838] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c31bfe7e-863f-41d7-b162-ae069a76ee07", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"bfc61d9547e60ea8939f2f1d95b5b5b4fd5906b9c37ddb91fe892bc672205af6", Pod:"coredns-674b8bbfcf-lghm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24d586fe50b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.311 
[INFO][6838] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.311 [INFO][6838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" iface="eth0" netns="" Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.311 [INFO][6838] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.311 [INFO][6838] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.376 [INFO][6846] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" HandleID="k8s-pod-network.eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.376 [INFO][6846] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.377 [INFO][6846] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.401 [WARNING][6846] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" HandleID="k8s-pod-network.eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.401 [INFO][6846] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" HandleID="k8s-pod-network.eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--lghm6-eth0" Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.408 [INFO][6846] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:05.431687 containerd[2021]: 2026-04-17 23:37:05.421 [INFO][6838] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6" Apr 17 23:37:05.434384 containerd[2021]: time="2026-04-17T23:37:05.431721714Z" level=info msg="TearDown network for sandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\" successfully" Apr 17 23:37:05.448070 containerd[2021]: time="2026-04-17T23:37:05.447967351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:37:05.448289 containerd[2021]: time="2026-04-17T23:37:05.448083655Z" level=info msg="RemovePodSandbox \"eace142a645b3b255fdf48be353df0a00f6f6972bbd2e7c7636a732733062cb6\" returns successfully" Apr 17 23:37:05.452085 containerd[2021]: time="2026-04-17T23:37:05.452034895Z" level=info msg="StopPodSandbox for \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\"" Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.543 [WARNING][6860] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"28f397bf-652b-49aa-8829-d5327f553244", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de", Pod:"goldmane-5b85766d88-gf6r8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali60a2fdcb9e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.543 [INFO][6860] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.543 [INFO][6860] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" iface="eth0" netns="" Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.543 [INFO][6860] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.543 [INFO][6860] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.592 [INFO][6867] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" HandleID="k8s-pod-network.c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.592 [INFO][6867] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.592 [INFO][6867] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.614 [WARNING][6867] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" HandleID="k8s-pod-network.c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.614 [INFO][6867] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" HandleID="k8s-pod-network.c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.618 [INFO][6867] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:05.626395 containerd[2021]: 2026-04-17 23:37:05.622 [INFO][6860] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:37:05.626395 containerd[2021]: time="2026-04-17T23:37:05.626204359Z" level=info msg="TearDown network for sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\" successfully" Apr 17 23:37:05.626395 containerd[2021]: time="2026-04-17T23:37:05.626244823Z" level=info msg="StopPodSandbox for \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\" returns successfully" Apr 17 23:37:05.629377 containerd[2021]: time="2026-04-17T23:37:05.627107875Z" level=info msg="RemovePodSandbox for \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\"" Apr 17 23:37:05.629377 containerd[2021]: time="2026-04-17T23:37:05.627160135Z" level=info msg="Forcibly stopping sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\"" Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.733 [WARNING][6882] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"28f397bf-652b-49aa-8829-d5327f553244", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"bb647ad117ba33f70a5bc31147bac22b899c526f2b90347f4956affccd4435de", Pod:"goldmane-5b85766d88-gf6r8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali60a2fdcb9e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.734 [INFO][6882] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.734 [INFO][6882] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" iface="eth0" netns="" Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.734 [INFO][6882] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.734 [INFO][6882] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.798 [INFO][6889] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" HandleID="k8s-pod-network.c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.798 [INFO][6889] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.800 [INFO][6889] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.817 [WARNING][6889] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" HandleID="k8s-pod-network.c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.818 [INFO][6889] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" HandleID="k8s-pod-network.c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Workload="ip--172--31--27--239-k8s-goldmane--5b85766d88--gf6r8-eth0" Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.820 [INFO][6889] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:05.834962 containerd[2021]: 2026-04-17 23:37:05.825 [INFO][6882] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b" Apr 17 23:37:05.834962 containerd[2021]: time="2026-04-17T23:37:05.834139616Z" level=info msg="TearDown network for sandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\" successfully" Apr 17 23:37:05.843051 containerd[2021]: time="2026-04-17T23:37:05.842971293Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:37:05.843227 containerd[2021]: time="2026-04-17T23:37:05.843093801Z" level=info msg="RemovePodSandbox \"c181926f1e548df8fdb5d1cf64cf3f73b2fc757cbbbc29fe55297a5796ab068b\" returns successfully" Apr 17 23:37:05.844011 containerd[2021]: time="2026-04-17T23:37:05.843938457Z" level=info msg="StopPodSandbox for \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\"" Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:05.930 [WARNING][6903] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0", GenerateName:"calico-apiserver-574894c46f-", Namespace:"calico-system", SelfLink:"", UID:"20e438e6-a5f5-45d7-b808-1a4fb95924d1", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574894c46f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3", Pod:"calico-apiserver-574894c46f-bcpkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif1c70744b6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:05.931 [INFO][6903] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:05.931 [INFO][6903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" iface="eth0" netns="" Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:05.931 [INFO][6903] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:05.931 [INFO][6903] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:05.990 [INFO][6910] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" HandleID="k8s-pod-network.03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:05.991 [INFO][6910] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:05.991 [INFO][6910] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:06.005 [WARNING][6910] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" HandleID="k8s-pod-network.03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:06.006 [INFO][6910] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" HandleID="k8s-pod-network.03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:06.010 [INFO][6910] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:06.030758 containerd[2021]: 2026-04-17 23:37:06.014 [INFO][6903] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:37:06.030758 containerd[2021]: time="2026-04-17T23:37:06.030110321Z" level=info msg="TearDown network for sandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\" successfully" Apr 17 23:37:06.030758 containerd[2021]: time="2026-04-17T23:37:06.030153425Z" level=info msg="StopPodSandbox for \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\" returns successfully" Apr 17 23:37:06.035300 containerd[2021]: time="2026-04-17T23:37:06.035242361Z" level=info msg="RemovePodSandbox for \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\"" Apr 17 23:37:06.035551 containerd[2021]: time="2026-04-17T23:37:06.035510765Z" level=info msg="Forcibly stopping sandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\"" Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.148 [WARNING][6924] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0", GenerateName:"calico-apiserver-574894c46f-", Namespace:"calico-system", SelfLink:"", UID:"20e438e6-a5f5-45d7-b808-1a4fb95924d1", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574894c46f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"9fe1524be3740d9bd3a2520be6a772ecd5272748d12be65800e0fd1d4d2d0ab3", Pod:"calico-apiserver-574894c46f-bcpkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif1c70744b6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.148 [INFO][6924] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.148 [INFO][6924] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" iface="eth0" netns="" Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.148 [INFO][6924] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.149 [INFO][6924] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.199 [INFO][6932] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" HandleID="k8s-pod-network.03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.199 [INFO][6932] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.203 [INFO][6932] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.224 [WARNING][6932] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" HandleID="k8s-pod-network.03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.224 [INFO][6932] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" HandleID="k8s-pod-network.03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Workload="ip--172--31--27--239-k8s-calico--apiserver--574894c46f--bcpkg-eth0" Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.229 [INFO][6932] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:06.236649 containerd[2021]: 2026-04-17 23:37:06.233 [INFO][6924] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371" Apr 17 23:37:06.238099 containerd[2021]: time="2026-04-17T23:37:06.236680386Z" level=info msg="TearDown network for sandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\" successfully" Apr 17 23:37:06.244135 containerd[2021]: time="2026-04-17T23:37:06.244044450Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:37:06.244325 containerd[2021]: time="2026-04-17T23:37:06.244160755Z" level=info msg="RemovePodSandbox \"03ca7a6e2260562ea4e0bf742b3a45eb7d821b44d404a83c268bed588ba71371\" returns successfully" Apr 17 23:37:06.245257 containerd[2021]: time="2026-04-17T23:37:06.244804579Z" level=info msg="StopPodSandbox for \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\"" Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.330 [WARNING][6947] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6421d27f-ddc6-4d22-86e3-4278c749f598", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade", Pod:"coredns-674b8bbfcf-mh6rz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65a365fcedd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.330 [INFO][6947] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.330 [INFO][6947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" iface="eth0" netns="" Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.330 [INFO][6947] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.330 [INFO][6947] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.382 [INFO][6954] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" HandleID="k8s-pod-network.10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.382 [INFO][6954] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.382 [INFO][6954] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.400 [WARNING][6954] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" HandleID="k8s-pod-network.10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.400 [INFO][6954] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" HandleID="k8s-pod-network.10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.403 [INFO][6954] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:06.412236 containerd[2021]: 2026-04-17 23:37:06.406 [INFO][6947] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:37:06.413054 containerd[2021]: time="2026-04-17T23:37:06.412291867Z" level=info msg="TearDown network for sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\" successfully" Apr 17 23:37:06.413054 containerd[2021]: time="2026-04-17T23:37:06.412330123Z" level=info msg="StopPodSandbox for \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\" returns successfully" Apr 17 23:37:06.413177 containerd[2021]: time="2026-04-17T23:37:06.413133487Z" level=info msg="RemovePodSandbox for \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\"" Apr 17 23:37:06.413230 containerd[2021]: time="2026-04-17T23:37:06.413178259Z" level=info msg="Forcibly stopping sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\"" Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.510 [WARNING][6968] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6421d27f-ddc6-4d22-86e3-4278c749f598", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"6ba569cb4b6916fed5a44c939844ff246702a5b7c3309d96893ab457a1ac2ade", Pod:"coredns-674b8bbfcf-mh6rz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65a365fcedd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.511 
[INFO][6968] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.511 [INFO][6968] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" iface="eth0" netns="" Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.511 [INFO][6968] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.511 [INFO][6968] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.561 [INFO][6976] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" HandleID="k8s-pod-network.10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.562 [INFO][6976] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.562 [INFO][6976] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.576 [WARNING][6976] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" HandleID="k8s-pod-network.10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.576 [INFO][6976] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" HandleID="k8s-pod-network.10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Workload="ip--172--31--27--239-k8s-coredns--674b8bbfcf--mh6rz-eth0" Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.579 [INFO][6976] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:06.591235 containerd[2021]: 2026-04-17 23:37:06.585 [INFO][6968] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82" Apr 17 23:37:06.591235 containerd[2021]: time="2026-04-17T23:37:06.591201404Z" level=info msg="TearDown network for sandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\" successfully" Apr 17 23:37:06.600292 containerd[2021]: time="2026-04-17T23:37:06.597913304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:37:06.600292 containerd[2021]: time="2026-04-17T23:37:06.598030904Z" level=info msg="RemovePodSandbox \"10f309a3833403343a4dca7d2328c8e8b3e9d7099b4f272e52fab9f9cf324f82\" returns successfully" Apr 17 23:37:06.600292 containerd[2021]: time="2026-04-17T23:37:06.599584892Z" level=info msg="StopPodSandbox for \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\"" Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.695 [WARNING][6990] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba48e602-86c6-47be-a12c-378408003d1d", ResourceVersion:"1285", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776", Pod:"csi-node-driver-gq6hp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicda6542d937", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.695 [INFO][6990] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.695 [INFO][6990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" iface="eth0" netns="" Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.695 [INFO][6990] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.695 [INFO][6990] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.743 [INFO][6997] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" HandleID="k8s-pod-network.789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.743 [INFO][6997] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.743 [INFO][6997] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.760 [WARNING][6997] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" HandleID="k8s-pod-network.789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.761 [INFO][6997] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" HandleID="k8s-pod-network.789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.764 [INFO][6997] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:06.772000 containerd[2021]: 2026-04-17 23:37:06.768 [INFO][6990] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:37:06.773228 containerd[2021]: time="2026-04-17T23:37:06.772042101Z" level=info msg="TearDown network for sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\" successfully" Apr 17 23:37:06.773228 containerd[2021]: time="2026-04-17T23:37:06.772079685Z" level=info msg="StopPodSandbox for \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\" returns successfully" Apr 17 23:37:06.773481 containerd[2021]: time="2026-04-17T23:37:06.773411817Z" level=info msg="RemovePodSandbox for \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\"" Apr 17 23:37:06.773595 containerd[2021]: time="2026-04-17T23:37:06.773493957Z" level=info msg="Forcibly stopping sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\"" Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.860 [WARNING][7011] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba48e602-86c6-47be-a12c-378408003d1d", ResourceVersion:"1285", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-239", ContainerID:"ee6e497736486df1b4e8100c5f0780e385b333a488517c764ec2f132c84e2776", Pod:"csi-node-driver-gq6hp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicda6542d937", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.860 [INFO][7011] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.860 [INFO][7011] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" iface="eth0" netns="" Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.860 [INFO][7011] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.860 [INFO][7011] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.925 [INFO][7019] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" HandleID="k8s-pod-network.789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.926 [INFO][7019] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.926 [INFO][7019] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.956 [WARNING][7019] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" HandleID="k8s-pod-network.789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.956 [INFO][7019] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" HandleID="k8s-pod-network.789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Workload="ip--172--31--27--239-k8s-csi--node--driver--gq6hp-eth0" Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.960 [INFO][7019] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:37:06.970042 containerd[2021]: 2026-04-17 23:37:06.965 [INFO][7011] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad" Apr 17 23:37:06.972190 containerd[2021]: time="2026-04-17T23:37:06.970096918Z" level=info msg="TearDown network for sandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\" successfully" Apr 17 23:37:06.976430 containerd[2021]: time="2026-04-17T23:37:06.976295878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:37:06.976567 containerd[2021]: time="2026-04-17T23:37:06.976472794Z" level=info msg="RemovePodSandbox \"789dd312265668a1b6af0510464746dec290eef736d8cb480ad2b43747aa61ad\" returns successfully" Apr 17 23:37:07.514457 systemd[1]: Started sshd@19-172.31.27.239:22-4.175.71.9:35666.service - OpenSSH per-connection server daemon (4.175.71.9:35666). 
Apr 17 23:37:08.525556 sshd[7026]: Accepted publickey for core from 4.175.71.9 port 35666 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:08.529298 sshd[7026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:08.543918 systemd-logind[1998]: New session 20 of user core. Apr 17 23:37:08.551114 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 23:37:09.376189 sshd[7026]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:09.386596 systemd[1]: sshd@19-172.31.27.239:22-4.175.71.9:35666.service: Deactivated successfully. Apr 17 23:37:09.400336 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 23:37:09.404760 systemd-logind[1998]: Session 20 logged out. Waiting for processes to exit. Apr 17 23:37:09.407967 systemd-logind[1998]: Removed session 20. Apr 17 23:37:14.553360 systemd[1]: Started sshd@20-172.31.27.239:22-4.175.71.9:35682.service - OpenSSH per-connection server daemon (4.175.71.9:35682). Apr 17 23:37:15.558691 sshd[7041]: Accepted publickey for core from 4.175.71.9 port 35682 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:15.561253 sshd[7041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:15.569359 systemd-logind[1998]: New session 21 of user core. Apr 17 23:37:15.580066 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 23:37:16.371153 sshd[7041]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:16.377430 systemd[1]: sshd@20-172.31.27.239:22-4.175.71.9:35682.service: Deactivated successfully. Apr 17 23:37:16.382528 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 23:37:16.384716 systemd-logind[1998]: Session 21 logged out. Waiting for processes to exit. Apr 17 23:37:16.387485 systemd-logind[1998]: Removed session 21. 
Apr 17 23:37:21.557365 systemd[1]: Started sshd@21-172.31.27.239:22-4.175.71.9:45244.service - OpenSSH per-connection server daemon (4.175.71.9:45244). Apr 17 23:37:22.561262 sshd[7074]: Accepted publickey for core from 4.175.71.9 port 45244 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:37:22.564367 sshd[7074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:22.575886 systemd-logind[1998]: New session 22 of user core. Apr 17 23:37:22.579088 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 17 23:37:23.397987 sshd[7074]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:23.405613 systemd[1]: sshd@21-172.31.27.239:22-4.175.71.9:45244.service: Deactivated successfully. Apr 17 23:37:23.409695 systemd[1]: session-22.scope: Deactivated successfully. Apr 17 23:37:23.411761 systemd-logind[1998]: Session 22 logged out. Waiting for processes to exit. Apr 17 23:37:23.414884 systemd-logind[1998]: Removed session 22. Apr 17 23:37:37.236754 kubelet[3435]: E0417 23:37:37.235962 3435 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-239?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 17 23:37:38.122119 systemd[1]: cri-containerd-de4b44ed3a446e616c640a7ae324696df426f7259e8933778579cd5bbfb839fa.scope: Deactivated successfully. Apr 17 23:37:38.124092 systemd[1]: cri-containerd-de4b44ed3a446e616c640a7ae324696df426f7259e8933778579cd5bbfb839fa.scope: Consumed 4.150s CPU time, 18.3M memory peak, 0B memory swap peak. 
Apr 17 23:37:38.173344 containerd[2021]: time="2026-04-17T23:37:38.173246869Z" level=info msg="shim disconnected" id=de4b44ed3a446e616c640a7ae324696df426f7259e8933778579cd5bbfb839fa namespace=k8s.io Apr 17 23:37:38.173344 containerd[2021]: time="2026-04-17T23:37:38.173334433Z" level=warning msg="cleaning up after shim disconnected" id=de4b44ed3a446e616c640a7ae324696df426f7259e8933778579cd5bbfb839fa namespace=k8s.io Apr 17 23:37:38.174009 containerd[2021]: time="2026-04-17T23:37:38.173360209Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:37:38.178390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de4b44ed3a446e616c640a7ae324696df426f7259e8933778579cd5bbfb839fa-rootfs.mount: Deactivated successfully. Apr 17 23:37:38.253445 systemd[1]: cri-containerd-ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a.scope: Deactivated successfully. Apr 17 23:37:38.256059 systemd[1]: cri-containerd-ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a.scope: Consumed 27.223s CPU time. Apr 17 23:37:38.301223 containerd[2021]: time="2026-04-17T23:37:38.301055954Z" level=info msg="shim disconnected" id=ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a namespace=k8s.io Apr 17 23:37:38.301223 containerd[2021]: time="2026-04-17T23:37:38.301153070Z" level=warning msg="cleaning up after shim disconnected" id=ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a namespace=k8s.io Apr 17 23:37:38.301223 containerd[2021]: time="2026-04-17T23:37:38.301175186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:37:38.308684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a-rootfs.mount: Deactivated successfully. 
Apr 17 23:37:38.947210 kubelet[3435]: I0417 23:37:38.946119 3435 scope.go:117] "RemoveContainer" containerID="ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a" Apr 17 23:37:38.949984 containerd[2021]: time="2026-04-17T23:37:38.949920473Z" level=info msg="CreateContainer within sandbox \"4241a73979e70679144f8a0bd68ad73fd387375b071c44adaab6caed8d51d9fd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 17 23:37:38.954523 kubelet[3435]: I0417 23:37:38.953828 3435 scope.go:117] "RemoveContainer" containerID="de4b44ed3a446e616c640a7ae324696df426f7259e8933778579cd5bbfb839fa" Apr 17 23:37:38.962381 containerd[2021]: time="2026-04-17T23:37:38.961631705Z" level=info msg="CreateContainer within sandbox \"e8e2e3adc609ba51f3d661b109452f5ce13520a125fc57235d7ed030c51565ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 17 23:37:38.990178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886610052.mount: Deactivated successfully. 
Apr 17 23:37:38.991116 containerd[2021]: time="2026-04-17T23:37:38.991045781Z" level=info msg="CreateContainer within sandbox \"4241a73979e70679144f8a0bd68ad73fd387375b071c44adaab6caed8d51d9fd\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8828c511adb9dbed7735c80f60d1e975bf7d66827a0aa938490ce75d3d35d2e4\"" Apr 17 23:37:38.993716 containerd[2021]: time="2026-04-17T23:37:38.993626945Z" level=info msg="StartContainer for \"8828c511adb9dbed7735c80f60d1e975bf7d66827a0aa938490ce75d3d35d2e4\"" Apr 17 23:37:39.008053 containerd[2021]: time="2026-04-17T23:37:39.007972177Z" level=info msg="CreateContainer within sandbox \"e8e2e3adc609ba51f3d661b109452f5ce13520a125fc57235d7ed030c51565ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"679aa8881d7229741e6814413a90def24305e6e947b934445a69bf51e3d4b224\"" Apr 17 23:37:39.009748 containerd[2021]: time="2026-04-17T23:37:39.009660481Z" level=info msg="StartContainer for \"679aa8881d7229741e6814413a90def24305e6e947b934445a69bf51e3d4b224\"" Apr 17 23:37:39.069636 systemd[1]: Started cri-containerd-8828c511adb9dbed7735c80f60d1e975bf7d66827a0aa938490ce75d3d35d2e4.scope - libcontainer container 8828c511adb9dbed7735c80f60d1e975bf7d66827a0aa938490ce75d3d35d2e4. Apr 17 23:37:39.089120 systemd[1]: Started cri-containerd-679aa8881d7229741e6814413a90def24305e6e947b934445a69bf51e3d4b224.scope - libcontainer container 679aa8881d7229741e6814413a90def24305e6e947b934445a69bf51e3d4b224. 
Apr 17 23:37:39.151881 containerd[2021]: time="2026-04-17T23:37:39.151602290Z" level=info msg="StartContainer for \"8828c511adb9dbed7735c80f60d1e975bf7d66827a0aa938490ce75d3d35d2e4\" returns successfully" Apr 17 23:37:39.198024 containerd[2021]: time="2026-04-17T23:37:39.195855410Z" level=info msg="StartContainer for \"679aa8881d7229741e6814413a90def24305e6e947b934445a69bf51e3d4b224\" returns successfully" Apr 17 23:37:43.699111 systemd[1]: cri-containerd-4e99231da1f6d09bfc4891469ae1b7c0357e714ce4488abb34815834c5d6c1e0.scope: Deactivated successfully. Apr 17 23:37:43.700438 systemd[1]: cri-containerd-4e99231da1f6d09bfc4891469ae1b7c0357e714ce4488abb34815834c5d6c1e0.scope: Consumed 3.216s CPU time, 16.2M memory peak, 0B memory swap peak. Apr 17 23:37:43.742362 containerd[2021]: time="2026-04-17T23:37:43.742257525Z" level=info msg="shim disconnected" id=4e99231da1f6d09bfc4891469ae1b7c0357e714ce4488abb34815834c5d6c1e0 namespace=k8s.io Apr 17 23:37:43.742362 containerd[2021]: time="2026-04-17T23:37:43.742350177Z" level=warning msg="cleaning up after shim disconnected" id=4e99231da1f6d09bfc4891469ae1b7c0357e714ce4488abb34815834c5d6c1e0 namespace=k8s.io Apr 17 23:37:43.745345 containerd[2021]: time="2026-04-17T23:37:43.742374597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:37:43.746415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e99231da1f6d09bfc4891469ae1b7c0357e714ce4488abb34815834c5d6c1e0-rootfs.mount: Deactivated successfully. 
Apr 17 23:37:43.979921 kubelet[3435]: I0417 23:37:43.979297 3435 scope.go:117] "RemoveContainer" containerID="4e99231da1f6d09bfc4891469ae1b7c0357e714ce4488abb34815834c5d6c1e0" Apr 17 23:37:43.985576 containerd[2021]: time="2026-04-17T23:37:43.985023958Z" level=info msg="CreateContainer within sandbox \"74060ee81740f6cf9c731fcb8276900a69241d10fca14f4d03c14fb0e6b70411\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 17 23:37:44.011411 containerd[2021]: time="2026-04-17T23:37:44.011179062Z" level=info msg="CreateContainer within sandbox \"74060ee81740f6cf9c731fcb8276900a69241d10fca14f4d03c14fb0e6b70411\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c1e8decf8c9607caa898d9ceb876d1bcc4e1cb7dddc249130b829662f545e9d3\"" Apr 17 23:37:44.015827 containerd[2021]: time="2026-04-17T23:37:44.011998470Z" level=info msg="StartContainer for \"c1e8decf8c9607caa898d9ceb876d1bcc4e1cb7dddc249130b829662f545e9d3\"" Apr 17 23:37:44.084126 systemd[1]: Started cri-containerd-c1e8decf8c9607caa898d9ceb876d1bcc4e1cb7dddc249130b829662f545e9d3.scope - libcontainer container c1e8decf8c9607caa898d9ceb876d1bcc4e1cb7dddc249130b829662f545e9d3. Apr 17 23:37:44.148160 containerd[2021]: time="2026-04-17T23:37:44.147945163Z" level=info msg="StartContainer for \"c1e8decf8c9607caa898d9ceb876d1bcc4e1cb7dddc249130b829662f545e9d3\" returns successfully" Apr 17 23:37:47.237522 kubelet[3435]: E0417 23:37:47.236492 3435 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-239?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 17 23:37:50.688503 systemd[1]: cri-containerd-8828c511adb9dbed7735c80f60d1e975bf7d66827a0aa938490ce75d3d35d2e4.scope: Deactivated successfully. 
Apr 17 23:37:50.729046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8828c511adb9dbed7735c80f60d1e975bf7d66827a0aa938490ce75d3d35d2e4-rootfs.mount: Deactivated successfully. Apr 17 23:37:50.739467 containerd[2021]: time="2026-04-17T23:37:50.739377316Z" level=info msg="shim disconnected" id=8828c511adb9dbed7735c80f60d1e975bf7d66827a0aa938490ce75d3d35d2e4 namespace=k8s.io Apr 17 23:37:50.739467 containerd[2021]: time="2026-04-17T23:37:50.739454536Z" level=warning msg="cleaning up after shim disconnected" id=8828c511adb9dbed7735c80f60d1e975bf7d66827a0aa938490ce75d3d35d2e4 namespace=k8s.io Apr 17 23:37:50.740367 containerd[2021]: time="2026-04-17T23:37:50.739479076Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:37:51.004476 kubelet[3435]: I0417 23:37:51.003878 3435 scope.go:117] "RemoveContainer" containerID="ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a" Apr 17 23:37:51.004476 kubelet[3435]: I0417 23:37:51.004335 3435 scope.go:117] "RemoveContainer" containerID="8828c511adb9dbed7735c80f60d1e975bf7d66827a0aa938490ce75d3d35d2e4" Apr 17 23:37:51.005353 kubelet[3435]: E0417 23:37:51.004628 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-g58mf_tigera-operator(3e61b9aa-8eca-40c6-9529-8b5a7b4da676)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-g58mf" podUID="3e61b9aa-8eca-40c6-9529-8b5a7b4da676" Apr 17 23:37:51.007311 containerd[2021]: time="2026-04-17T23:37:51.006899677Z" level=info msg="RemoveContainer for \"ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a\"" Apr 17 23:37:51.014192 containerd[2021]: time="2026-04-17T23:37:51.014089813Z" level=info msg="RemoveContainer for \"ed7b00e2193d4c421e102b3f4812b87f867a90f1dd1e5d88720ac5e52363931a\" returns successfully"