Sep 12 23:54:52.255918 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 12 23:54:52.255963 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 12 23:54:52.255987 kernel: KASLR disabled due to lack of seed
Sep 12 23:54:52.256004 kernel: efi: EFI v2.7 by EDK II
Sep 12 23:54:52.256020 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Sep 12 23:54:52.256036 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:54:52.256054 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 12 23:54:52.256069 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 23:54:52.256085 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 23:54:52.256101 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 12 23:54:52.256121 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 23:54:52.256137 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 12 23:54:52.256153 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 12 23:54:52.256168 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 12 23:54:52.256188 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 23:54:52.256208 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 12 23:54:52.256226 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 12 23:54:52.256243 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 12 23:54:52.256306 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 12 23:54:52.256329 kernel: printk: bootconsole [uart0] enabled
Sep 12 23:54:52.256346 kernel: NUMA: Failed to initialise from firmware
Sep 12 23:54:52.256364 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 23:54:52.256381 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 12 23:54:52.256398 kernel: Zone ranges:
Sep 12 23:54:52.256415 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 12 23:54:52.256433 kernel: DMA32 empty
Sep 12 23:54:52.256525 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 12 23:54:52.256543 kernel: Movable zone start for each node
Sep 12 23:54:52.256560 kernel: Early memory node ranges
Sep 12 23:54:52.256577 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 12 23:54:52.256594 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 12 23:54:52.256611 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 12 23:54:52.256628 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 12 23:54:52.256645 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 12 23:54:52.256662 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 12 23:54:52.256678 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 12 23:54:52.256695 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 12 23:54:52.256712 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 23:54:52.256735 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 12 23:54:52.256753 kernel: psci: probing for conduit method from ACPI.
Sep 12 23:54:52.256777 kernel: psci: PSCIv1.0 detected in firmware.
Sep 12 23:54:52.256796 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 23:54:52.256813 kernel: psci: Trusted OS migration not required
Sep 12 23:54:52.256835 kernel: psci: SMC Calling Convention v1.1
Sep 12 23:54:52.256853 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 12 23:54:52.256871 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 23:54:52.256888 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 23:54:52.256907 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 23:54:52.256924 kernel: Detected PIPT I-cache on CPU0
Sep 12 23:54:52.256942 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 23:54:52.256959 kernel: CPU features: detected: Spectre-v2
Sep 12 23:54:52.256977 kernel: CPU features: detected: Spectre-v3a
Sep 12 23:54:52.256994 kernel: CPU features: detected: Spectre-BHB
Sep 12 23:54:52.257012 kernel: CPU features: detected: ARM erratum 1742098
Sep 12 23:54:52.257034 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 12 23:54:52.257051 kernel: alternatives: applying boot alternatives
Sep 12 23:54:52.257071 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:54:52.257091 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:54:52.257109 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:54:52.257126 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:54:52.257144 kernel: Fallback order for Node 0: 0
Sep 12 23:54:52.257161 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 12 23:54:52.257179 kernel: Policy zone: Normal
Sep 12 23:54:52.257196 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:54:52.257213 kernel: software IO TLB: area num 2.
Sep 12 23:54:52.257235 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 12 23:54:52.257254 kernel: Memory: 3820024K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 210440K reserved, 0K cma-reserved)
Sep 12 23:54:52.257271 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 23:54:52.257289 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:54:52.257307 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:54:52.257326 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 23:54:52.257344 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:54:52.257361 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:54:52.257379 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
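
The kernel command line above is where Flatcar wires up dm-verity for /usr: verity.usr names the partition by PARTUUID and verity.usrhash pins the expected root hash. A minimal sketch of splitting such a line into bare flags and key=value pairs for inspection; the parse_cmdline helper and the shortened sample string are illustrative, not Flatcar code:

# Minimal sketch: split a kernel command line into flags and key=value pairs.
# parse_cmdline is a hypothetical helper for log inspection, not Flatcar code.
import shlex

def parse_cmdline(cmdline: str):
    flags, params = [], {}
    for token in shlex.split(cmdline):
        if "=" in token:
            key, _, value = token.partition("=")  # split on the first '='
            params[key] = value
        else:
            flags.append(token)
    return flags, params

flags, params = parse_cmdline(
    "root=LABEL=ROOT mount.usr=/dev/mapper/usr mount.usrflags=ro earlycon "
    "verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9"
)
assert "earlycon" in flags
assert len(params["verity.usrhash"]) == 64  # hex-encoded SHA-256 root hash
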
Sep 12 23:54:52.257396 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 23:54:52.257414 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 23:54:52.257465 kernel: GICv3: 96 SPIs implemented
Sep 12 23:54:52.257490 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 23:54:52.257508 kernel: Root IRQ handler: gic_handle_irq
Sep 12 23:54:52.257525 kernel: GICv3: GICv3 features: 16 PPIs
Sep 12 23:54:52.257543 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 12 23:54:52.257561 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 12 23:54:52.257579 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 23:54:52.257597 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 23:54:52.257615 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 12 23:54:52.257632 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 12 23:54:52.257650 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 12 23:54:52.257668 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:54:52.257694 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 12 23:54:52.257712 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 12 23:54:52.257730 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 12 23:54:52.257748 kernel: Console: colour dummy device 80x25
Sep 12 23:54:52.257766 kernel: printk: console [tty1] enabled
Sep 12 23:54:52.257784 kernel: ACPI: Core revision 20230628
Sep 12 23:54:52.257803 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 12 23:54:52.257821 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:54:52.257840 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 23:54:52.257862 kernel: landlock: Up and running.
Sep 12 23:54:52.257881 kernel: SELinux: Initializing.
Sep 12 23:54:52.257899 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:54:52.257917 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:54:52.257936 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:54:52.257954 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:54:52.257972 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:54:52.257990 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:54:52.258008 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 12 23:54:52.258031 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 12 23:54:52.258049 kernel: Remapping and enabling EFI services.
Sep 12 23:54:52.258067 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:54:52.258085 kernel: Detected PIPT I-cache on CPU1
Sep 12 23:54:52.258103 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 12 23:54:52.258122 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 12 23:54:52.258140 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 12 23:54:52.258158 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 23:54:52.258175 kernel: SMP: Total of 2 processors activated.
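
The calibration line is consistent with the 83.33 MHz architected timer: the kernel skips the delay loop and derives loops-per-jiffy from the timer instead. A quick cross-check of the printed numbers, assuming a tick rate of HZ=1000 (the log does not state it):

# Cross-check "166.66 BogoMIPS (lpj=83333)" against the 83.33 MHz arch timer.
# HZ=1000 is an assumption here; the log does not print the tick rate.
HZ = 1000
lpj = 83333                      # loops-per-jiffy, from the log line
bogomips = lpj * HZ / 500_000    # the kernel's reporting convention, ~166.66
timer_hz = lpj * HZ              # timer ticks per second implied by lpj
print(f"{bogomips:.2f} BogoMIPS, {timer_hz / 1e6:.2f} MHz")
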
Sep 12 23:54:52.258193 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 23:54:52.258216 kernel: CPU features: detected: 32-bit EL1 Support
Sep 12 23:54:52.258234 kernel: CPU features: detected: CRC32 instructions
Sep 12 23:54:52.258263 kernel: CPU: All CPU(s) started at EL1
Sep 12 23:54:52.258286 kernel: alternatives: applying system-wide alternatives
Sep 12 23:54:52.258305 kernel: devtmpfs: initialized
Sep 12 23:54:52.258325 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:54:52.258343 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 23:54:52.258362 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:54:52.258381 kernel: SMBIOS 3.0.0 present.
Sep 12 23:54:52.258404 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 12 23:54:52.258423 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:54:52.258532 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 23:54:52.258556 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 23:54:52.258575 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 23:54:52.258594 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:54:52.258613 kernel: audit: type=2000 audit(0.292:1): state=initialized audit_enabled=0 res=1
Sep 12 23:54:52.258639 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:54:52.258659 kernel: cpuidle: using governor menu
Sep 12 23:54:52.258677 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 23:54:52.258696 kernel: ASID allocator initialised with 65536 entries
Sep 12 23:54:52.258715 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:54:52.258733 kernel: Serial: AMBA PL011 UART driver
Sep 12 23:54:52.258752 kernel: Modules: 17472 pages in range for non-PLT usage
Sep 12 23:54:52.258770 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 23:54:52.258789 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:54:52.258812 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:54:52.258831 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 23:54:52.258850 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 23:54:52.258869 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:54:52.258887 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:54:52.258906 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 23:54:52.258925 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 23:54:52.258943 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:54:52.258961 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:54:52.258984 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:54:52.259004 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:54:52.259022 kernel: ACPI: Interpreter enabled
Sep 12 23:54:52.259041 kernel: ACPI: Using GIC for interrupt routing
Sep 12 23:54:52.259061 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 23:54:52.259081 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 12 23:54:52.259421 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:54:52.259755 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 23:54:52.260045 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 23:54:52.260298 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 12 23:54:52.260570 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 12 23:54:52.260599 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 12 23:54:52.260619 kernel: acpiphp: Slot [1] registered
Sep 12 23:54:52.260638 kernel: acpiphp: Slot [2] registered
Sep 12 23:54:52.260657 kernel: acpiphp: Slot [3] registered
Sep 12 23:54:52.260676 kernel: acpiphp: Slot [4] registered
Sep 12 23:54:52.260703 kernel: acpiphp: Slot [5] registered
Sep 12 23:54:52.260722 kernel: acpiphp: Slot [6] registered
Sep 12 23:54:52.260741 kernel: acpiphp: Slot [7] registered
Sep 12 23:54:52.260760 kernel: acpiphp: Slot [8] registered
Sep 12 23:54:52.260779 kernel: acpiphp: Slot [9] registered
Sep 12 23:54:52.260798 kernel: acpiphp: Slot [10] registered
Sep 12 23:54:52.260816 kernel: acpiphp: Slot [11] registered
Sep 12 23:54:52.260835 kernel: acpiphp: Slot [12] registered
Sep 12 23:54:52.260854 kernel: acpiphp: Slot [13] registered
Sep 12 23:54:52.260872 kernel: acpiphp: Slot [14] registered
Sep 12 23:54:52.260896 kernel: acpiphp: Slot [15] registered
Sep 12 23:54:52.260915 kernel: acpiphp: Slot [16] registered
Sep 12 23:54:52.260933 kernel: acpiphp: Slot [17] registered
Sep 12 23:54:52.260952 kernel: acpiphp: Slot [18] registered
Sep 12 23:54:52.260971 kernel: acpiphp: Slot [19] registered
Sep 12 23:54:52.260989 kernel: acpiphp: Slot [20] registered
Sep 12 23:54:52.261007 kernel: acpiphp: Slot [21] registered
Sep 12 23:54:52.261026 kernel: acpiphp: Slot [22] registered
Sep 12 23:54:52.261044 kernel: acpiphp: Slot [23] registered
Sep 12 23:54:52.261068 kernel: acpiphp: Slot [24] registered
Sep 12 23:54:52.261087 kernel: acpiphp: Slot [25] registered
Sep 12 23:54:52.261106 kernel: acpiphp: Slot [26] registered
Sep 12 23:54:52.261125 kernel: acpiphp: Slot [27] registered
Sep 12 23:54:52.261144 kernel: acpiphp: Slot [28] registered
Sep 12 23:54:52.261163 kernel: acpiphp: Slot [29] registered
Sep 12 23:54:52.261181 kernel: acpiphp: Slot [30] registered
Sep 12 23:54:52.261200 kernel: acpiphp: Slot [31] registered
Sep 12 23:54:52.261218 kernel: PCI host bridge to bus 0000:00
Sep 12 23:54:52.262001 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 12 23:54:52.262223 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 23:54:52.262417 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 12 23:54:52.262657 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 12 23:54:52.262902 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 12 23:54:52.263134 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 12 23:54:52.263351 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 12 23:54:52.263649 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 12 23:54:52.263873 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 12 23:54:52.264096 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 23:54:52.264361 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 12 23:54:52.264645 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 12 23:54:52.264865 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 12 23:54:52.265082 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 12 23:54:52.265289 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 23:54:52.266008 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 12 23:54:52.266245 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 12 23:54:52.266599 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 12 23:54:52.266873 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 12 23:54:52.267107 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 12 23:54:52.267313 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 12 23:54:52.267626 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 23:54:52.267842 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 12 23:54:52.267870 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 23:54:52.267891 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 23:54:52.267911 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 23:54:52.267931 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 23:54:52.267950 kernel: iommu: Default domain type: Translated
Sep 12 23:54:52.267968 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 23:54:52.267996 kernel: efivars: Registered efivars operations
Sep 12 23:54:52.268015 kernel: vgaarb: loaded
Sep 12 23:54:52.268033 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 23:54:52.268052 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:54:52.268071 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:54:52.268089 kernel: pnp: PnP ACPI init
Sep 12 23:54:52.268352 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 12 23:54:52.268383 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 23:54:52.268409 kernel: NET: Registered PF_INET protocol family
Sep 12 23:54:52.268429 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:54:52.268478 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:54:52.268500 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:54:52.268519 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:54:52.268538 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:54:52.268559 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:54:52.268579 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:54:52.268598 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:54:52.268625 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:54:52.268643 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:54:52.268662 kernel: kvm [1]: HYP mode not available
Sep 12 23:54:52.268681 kernel: Initialise system trusted keyrings
Sep 12 23:54:52.268699 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:54:52.268718 kernel: Key type asymmetric registered
Sep 12 23:54:52.268736 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:54:52.268754 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 23:54:52.268773 kernel: io scheduler mq-deadline registered
Sep 12 23:54:52.268796 kernel: io scheduler kyber registered
Sep 12 23:54:52.268815 kernel: io scheduler bfq registered
Sep 12 23:54:52.269045 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 12 23:54:52.269074 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 23:54:52.269094 kernel: ACPI: button: Power Button [PWRB]
Sep 12 23:54:52.269113 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 12 23:54:52.269131 kernel: ACPI: button: Sleep Button [SLPB]
Sep 12 23:54:52.269150 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:54:52.269175 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 12 23:54:52.269382 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 12 23:54:52.269409 kernel: printk: console [ttyS0] disabled
Sep 12 23:54:52.269428 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 12 23:54:52.269481 kernel: printk: console [ttyS0] enabled
Sep 12 23:54:52.269502 kernel: printk: bootconsole [uart0] disabled
Sep 12 23:54:52.269520 kernel: thunder_xcv, ver 1.0
Sep 12 23:54:52.269539 kernel: thunder_bgx, ver 1.0
Sep 12 23:54:52.269557 kernel: nicpf, ver 1.0
Sep 12 23:54:52.269582 kernel: nicvf, ver 1.0
Sep 12 23:54:52.269797 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 23:54:52.270051 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:54:51 UTC (1757721291)
Sep 12 23:54:52.270079 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 23:54:52.270099 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 12 23:54:52.270118 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 23:54:52.270137 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 23:54:52.270155 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:54:52.270180 kernel: Segment Routing with IPv6
Sep 12 23:54:52.270199 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:54:52.270218 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:54:52.270236 kernel: Key type dns_resolver registered
Sep 12 23:54:52.270255 kernel: registered taskstats version 1
Sep 12 23:54:52.270273 kernel: Loading compiled-in X.509 certificates
Sep 12 23:54:52.270292 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 12 23:54:52.270311 kernel: Key type .fscrypt registered
Sep 12 23:54:52.270329 kernel: Key type fscrypt-provisioning registered
Sep 12 23:54:52.270352 kernel: ima: No TPM chip found, activating TPM-bypass!
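
Earlier in the bridge setup, the PNP0A08 root complex exposed a 16 MiB ECAM window at 0x20000000 for buses 00-0f, which fixes where each function's 4 KiB of config space lives. A sketch of the standard PCIe ECAM address calculation, applied to the functions enumerated above:

# Standard PCIe ECAM layout: 4 KiB of config space per function, addressed as
# base + (bus << 20 | device << 15 | function << 12). Base is from the log.
ECAM_BASE = 0x2000_0000

def ecam_addr(bus: int, dev: int, fn: int) -> int:
    return ECAM_BASE + (bus << 20) + (dev << 15) + (fn << 12)

assert ecam_addr(0, 1, 0) == 0x2000_8000   # 0000:00:01.0, the 16550A serial port
assert ecam_addr(0, 4, 0) == 0x2002_0000   # 0000:00:04.0, the NVMe controller
assert ecam_addr(0, 5, 0) == 0x2002_8000   # 0000:00:05.0, the ENA network adapter
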
Sep 12 23:54:52.270371 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:54:52.270390 kernel: ima: No architecture policies found
Sep 12 23:54:52.270408 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 23:54:52.270427 kernel: clk: Disabling unused clocks
Sep 12 23:54:52.270478 kernel: Freeing unused kernel memory: 39488K
Sep 12 23:54:52.270501 kernel: Run /init as init process
Sep 12 23:54:52.270520 kernel: with arguments:
Sep 12 23:54:52.270539 kernel: /init
Sep 12 23:54:52.270558 kernel: with environment:
Sep 12 23:54:52.270584 kernel: HOME=/
Sep 12 23:54:52.270603 kernel: TERM=linux
Sep 12 23:54:52.270622 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:54:52.270645 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 23:54:52.270670 systemd[1]: Detected virtualization amazon.
Sep 12 23:54:52.270691 systemd[1]: Detected architecture arm64.
Sep 12 23:54:52.270711 systemd[1]: Running in initrd.
Sep 12 23:54:52.270736 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:54:52.270756 systemd[1]: Hostname set to .
Sep 12 23:54:52.270777 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:54:52.270797 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:54:52.270818 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:54:52.270839 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:54:52.270860 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:54:52.270881 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:54:52.270907 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:54:52.270928 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:54:52.270952 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:54:52.270974 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:54:52.270994 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:54:52.271015 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:54:52.271035 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:54:52.271060 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:54:52.271081 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:54:52.271101 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:54:52.271122 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:54:52.271142 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:54:52.271163 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:54:52.271183 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 23:54:52.271204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:54:52.271224 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:54:52.271250 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:54:52.271271 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:54:52.271291 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:54:52.271312 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:54:52.271332 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:54:52.271352 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:54:52.271373 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:54:52.271393 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:54:52.271418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:54:52.271473 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:54:52.271499 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:54:52.271520 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:54:52.271580 systemd-journald[249]: Collecting audit messages is disabled.
Sep 12 23:54:52.271630 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:54:52.271651 systemd-journald[249]: Journal started
Sep 12 23:54:52.271693 systemd-journald[249]: Runtime Journal (/run/log/journal/ec2861741484674a260305319b488dea) is 8.0M, max 75.3M, 67.3M free.
Sep 12 23:54:52.262018 systemd-modules-load[251]: Inserted module 'overlay'
Sep 12 23:54:52.300185 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:54:52.303550 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:54:52.320906 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:54:52.334471 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:54:52.336286 systemd-modules-load[251]: Inserted module 'br_netfilter'
Sep 12 23:54:52.339114 kernel: Bridge firewalling registered
Sep 12 23:54:52.340093 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:54:52.349348 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:54:52.357718 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:54:52.359167 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:54:52.382789 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:54:52.419531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:54:52.433829 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:54:52.444190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:54:52.451868 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:54:52.462515 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:54:52.476779 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 23:54:52.528233 dracut-cmdline[292]: dracut-dracut-053
Sep 12 23:54:52.534815 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:54:52.550260 systemd-resolved[290]: Positive Trust Anchors:
Sep 12 23:54:52.550295 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:54:52.550359 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:54:52.709469 kernel: SCSI subsystem initialized
Sep 12 23:54:52.716477 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 23:54:52.729483 kernel: iscsi: registered transport (tcp)
Sep 12 23:54:52.751692 kernel: iscsi: registered transport (qla4xxx)
Sep 12 23:54:52.751765 kernel: QLogic iSCSI HBA Driver
Sep 12 23:54:52.812475 kernel: random: crng init done
Sep 12 23:54:52.813097 systemd-resolved[290]: Defaulting to hostname 'linux'.
Sep 12 23:54:52.818590 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:54:52.829736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:54:52.843537 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:54:52.855868 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 23:54:52.893657 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:54:52.893734 kernel: device-mapper: uevent: version 1.0.3
Sep 12 23:54:52.895643 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 23:54:52.966530 kernel: raid6: neonx8 gen() 6729 MB/s
Sep 12 23:54:52.983499 kernel: raid6: neonx4 gen() 6547 MB/s
Sep 12 23:54:53.000499 kernel: raid6: neonx2 gen() 5443 MB/s
Sep 12 23:54:53.017505 kernel: raid6: neonx1 gen() 3919 MB/s
Sep 12 23:54:53.034495 kernel: raid6: int64x8 gen() 3804 MB/s
Sep 12 23:54:53.051503 kernel: raid6: int64x4 gen() 3717 MB/s
Sep 12 23:54:53.068498 kernel: raid6: int64x2 gen() 3590 MB/s
Sep 12 23:54:53.086507 kernel: raid6: int64x1 gen() 2745 MB/s
Sep 12 23:54:53.086587 kernel: raid6: using algorithm neonx8 gen() 6729 MB/s
Sep 12 23:54:53.105506 kernel: raid6: .... xor() 4758 MB/s, rmw enabled
Sep 12 23:54:53.105588 kernel: raid6: using neon recovery algorithm
Sep 12 23:54:53.115103 kernel: xor: measuring software checksum speed
Sep 12 23:54:53.115179 kernel: 8regs : 10646 MB/sec
Sep 12 23:54:53.116330 kernel: 32regs : 11954 MB/sec
Sep 12 23:54:53.118684 kernel: arm64_neon : 8979 MB/sec
Sep 12 23:54:53.118757 kernel: xor: using function: 32regs (11954 MB/sec)
Sep 12 23:54:53.205500 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 23:54:53.228522 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:54:53.243789 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:54:53.296638 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 12 23:54:53.305799 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:54:53.318854 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 23:54:53.356176 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Sep 12 23:54:53.422568 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:54:53.438887 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:54:53.556474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:54:53.573675 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 23:54:53.605113 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:54:53.616197 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:54:53.620198 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:54:53.634484 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:54:53.649690 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 23:54:53.704736 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:54:53.792667 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 23:54:53.792752 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 12 23:54:53.796905 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 23:54:53.797306 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 23:54:53.810491 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:94:ac:7f:66:3d
Sep 12 23:54:53.812425 (udev-worker)[522]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 23:54:53.834911 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:54:53.840531 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:54:53.851702 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:54:53.858247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:54:53.858608 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:54:53.865018 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:54:53.880470 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 12 23:54:53.883479 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 23:54:53.887912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
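
The raid6 and xor lines are the kernel benchmarking each candidate implementation and keeping the fastest: neonx8 for RAID6 parity generation, 32regs for XOR checksumming. The selection itself is an argmax over measured throughput; a sketch using the figures from this boot:

# The kernel's choice reduces to "pick the highest-throughput candidate".
# The MB/s figures below are the values measured during this boot.
raid6_gen = {"neonx8": 6729, "neonx4": 6547, "neonx2": 5443, "neonx1": 3919,
             "int64x8": 3804, "int64x4": 3717, "int64x2": 3590, "int64x1": 2745}
xor_funcs = {"8regs": 10646, "32regs": 11954, "arm64_neon": 8979}

best_gen = max(raid6_gen, key=raid6_gen.get)
best_xor = max(xor_funcs, key=xor_funcs.get)
assert best_gen == "neonx8" and best_xor == "32regs"  # matches the log lines
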
Sep 12 23:54:53.902483 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 23:54:53.911920 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 23:54:53.911992 kernel: GPT:9289727 != 16777215
Sep 12 23:54:53.913283 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 23:54:53.914162 kernel: GPT:9289727 != 16777215
Sep 12 23:54:53.915314 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 23:54:53.916313 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:54:53.920501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:54:53.934873 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:54:53.974033 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:54:54.064483 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (516)
Sep 12 23:54:54.093529 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (538)
Sep 12 23:54:54.138979 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 23:54:54.199144 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 23:54:54.220471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 23:54:54.246856 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 23:54:54.255749 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 23:54:54.281806 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 23:54:54.302566 disk-uuid[662]: Primary Header is updated.
Sep 12 23:54:54.302566 disk-uuid[662]: Secondary Entries is updated.
Sep 12 23:54:54.302566 disk-uuid[662]: Secondary Header is updated.
Sep 12 23:54:54.313669 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:54:54.321489 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:54:54.332487 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:54:55.339508 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:54:55.340989 disk-uuid[663]: The operation has completed successfully.
Sep 12 23:54:55.550809 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 23:54:55.552566 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 23:54:55.604830 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 23:54:55.620908 sh[1004]: Success
Sep 12 23:54:55.645292 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 23:54:55.764341 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 23:54:55.784635 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 23:54:55.789468 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
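
The GPT warnings are the classic sign of a disk image written to a larger volume: the backup header sits at LBA 9289727, where the original image ended, while the volume's last LBA is 16777215 (8 GiB in 512-byte sectors). disk-uuid.service then rewrites the headers, which is why the partition table rereads cleanly afterwards. A sketch of the consistency check behind the warning, using the logged numbers:

# The kernel expects the alternate GPT header on the disk's last LBA.
# Numbers are taken from the "GPT:9289727 != 16777215" lines above.
SECTOR = 512
alt_header_lba = 9_289_727      # where the image put the backup header
disk_sectors = 16_777_216       # actual EBS volume size in sectors
last_lba = disk_sectors - 1     # 16777215, where the header should be

assert alt_header_lba != last_lba                                      # hence the warning
print(f"image size  {(alt_header_lba + 1) * SECTOR / 2**30:.2f} GiB")  # ~4.43 GiB
print(f"volume size {disk_sectors * SECTOR / 2**30:.2f} GiB")          # 8.00 GiB
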
Sep 12 23:54:55.837946 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 12 23:54:55.838047 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:54:55.838090 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 23:54:55.839931 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 23:54:55.841326 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 23:54:55.928520 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 23:54:55.954753 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 23:54:55.962430 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 23:54:55.976808 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 23:54:55.988132 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 23:54:56.024132 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:54:56.024227 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:54:56.024284 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 23:54:56.047501 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 23:54:56.071607 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 23:54:56.079495 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:54:56.090032 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 23:54:56.106807 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 23:54:56.221940 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:54:56.234799 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:54:56.310629 systemd-networkd[1196]: lo: Link UP
Sep 12 23:54:56.310649 systemd-networkd[1196]: lo: Gained carrier
Sep 12 23:54:56.318959 systemd-networkd[1196]: Enumeration completed
Sep 12 23:54:56.319189 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:54:56.322271 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:54:56.322279 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:54:56.324701 systemd[1]: Reached target network.target - Network.
Sep 12 23:54:56.330851 systemd-networkd[1196]: eth0: Link UP
Sep 12 23:54:56.330861 systemd-networkd[1196]: eth0: Gained carrier
Sep 12 23:54:56.330882 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:54:56.372602 systemd-networkd[1196]: eth0: DHCPv4 address 172.31.20.162/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 23:54:56.570237 ignition[1121]: Ignition 2.19.0
Sep 12 23:54:56.570507 ignition[1121]: Stage: fetch-offline
Sep 12 23:54:56.572224 ignition[1121]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:54:56.572273 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:54:56.585214 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:54:56.576790 ignition[1121]: Ignition finished successfully
Sep 12 23:54:56.605935 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 23:54:56.643113 ignition[1206]: Ignition 2.19.0
Sep 12 23:54:56.643143 ignition[1206]: Stage: fetch
Sep 12 23:54:56.643944 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:54:56.643974 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:54:56.644151 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:54:56.658784 ignition[1206]: PUT result: OK
Sep 12 23:54:56.668655 ignition[1206]: parsed url from cmdline: ""
Sep 12 23:54:56.668673 ignition[1206]: no config URL provided
Sep 12 23:54:56.668692 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:54:56.668721 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:54:56.668759 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:54:56.672577 ignition[1206]: PUT result: OK
Sep 12 23:54:56.672689 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 23:54:56.682543 ignition[1206]: GET result: OK
Sep 12 23:54:56.682770 ignition[1206]: parsing config with SHA512: b27dafb79d585d75b2f80adfdbac85d8cc089628a487682ead33c6737a2217f8ea32f25b3a50ca9c024b20b4eae01d8d56020cd01c899e0ada15f6920bcc90e9
Sep 12 23:54:56.697001 unknown[1206]: fetched base config from "system"
Sep 12 23:54:56.697749 ignition[1206]: fetch: fetch complete
Sep 12 23:54:56.697021 unknown[1206]: fetched base config from "system"
Sep 12 23:54:56.697763 ignition[1206]: fetch: fetch passed
Sep 12 23:54:56.697036 unknown[1206]: fetched user config from "aws"
Sep 12 23:54:56.697879 ignition[1206]: Ignition finished successfully
Sep 12 23:54:56.711185 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 23:54:56.724937 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 23:54:56.759689 ignition[1212]: Ignition 2.19.0
Sep 12 23:54:56.759724 ignition[1212]: Stage: kargs
Sep 12 23:54:56.761983 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:54:56.762038 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:54:56.762230 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:54:56.772884 ignition[1212]: PUT result: OK
Sep 12 23:54:56.778267 ignition[1212]: kargs: kargs passed
Sep 12 23:54:56.778677 ignition[1212]: Ignition finished successfully
Sep 12 23:54:56.785043 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 23:54:56.808935 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
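
Each Ignition stage re-authenticates against the EC2 instance metadata service: an HTTP PUT to /latest/api/token obtains an IMDSv2 session token, which is then presented on the GET for user-data. A minimal sketch of that flow with urllib; the endpoints are the ones in the log, while the token TTL value is an assumption:

# IMDSv2 flow as logged by Ignition: PUT for a session token, then GET
# user-data with the token attached. The 21600 s TTL is an assumed value.
import urllib.request

IMDS = "http://169.254.169.254"

token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

data_req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token})
user_data = urllib.request.urlopen(data_req, timeout=2).read()
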
Sep 12 23:54:56.835776 ignition[1218]: Ignition 2.19.0
Sep 12 23:54:56.836338 ignition[1218]: Stage: disks
Sep 12 23:54:56.837030 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:54:56.837056 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:54:56.837216 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:54:56.847719 ignition[1218]: PUT result: OK
Sep 12 23:54:56.856134 ignition[1218]: disks: disks passed
Sep 12 23:54:56.856261 ignition[1218]: Ignition finished successfully
Sep 12 23:54:56.862472 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 23:54:56.862999 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 23:54:56.871060 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:54:56.874192 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:54:56.876775 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:54:56.882130 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:54:56.901715 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 23:54:56.945082 systemd-fsck[1227]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 23:54:56.951326 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 23:54:56.964769 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 23:54:57.043479 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 12 23:54:57.043998 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 23:54:57.048550 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:54:57.064741 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:54:57.075667 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 23:54:57.078278 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 23:54:57.078368 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 23:54:57.078420 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:54:57.096120 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 23:54:57.107047 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 23:54:57.126494 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1246)
Sep 12 23:54:57.130512 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:54:57.130566 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:54:57.132019 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 23:54:57.145473 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 23:54:57.147741 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:54:57.493004 initrd-setup-root[1270]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 23:54:57.504906 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory
Sep 12 23:54:57.515991 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 23:54:57.525868 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 23:54:57.578794 systemd-networkd[1196]: eth0: Gained IPv6LL
Sep 12 23:54:57.853783 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 23:54:57.867774 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 23:54:57.885905 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 23:54:57.898847 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:54:57.901168 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 23:54:57.947495 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 23:54:57.962043 ignition[1359]: INFO : Ignition 2.19.0
Sep 12 23:54:57.964638 ignition[1359]: INFO : Stage: mount
Sep 12 23:54:57.964638 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:54:57.964638 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:54:57.964638 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:54:57.976966 ignition[1359]: INFO : PUT result: OK
Sep 12 23:54:57.982247 ignition[1359]: INFO : mount: mount passed
Sep 12 23:54:57.984641 ignition[1359]: INFO : Ignition finished successfully
Sep 12 23:54:57.986718 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 23:54:58.017758 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 23:54:58.055067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:54:58.078496 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1371)
Sep 12 23:54:58.083194 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:54:58.083264 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:54:58.083292 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 23:54:58.089487 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 23:54:58.093910 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
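
The DHCPv4 lease logged earlier, 172.31.20.162/20 from gateway 172.31.16.1, is easy to sanity-check: a /20 prefix rounds the network down to 172.31.16.0, so the gateway sits on the first usable address of the same subnet. With the standard library:

# Sanity-check the lease from the systemd-networkd line:
# "DHCPv4 address 172.31.20.162/20, gateway 172.31.16.1".
import ipaddress

iface = ipaddress.ip_interface("172.31.20.162/20")
gateway = ipaddress.ip_address("172.31.16.1")

assert iface.network == ipaddress.ip_network("172.31.16.0/20")
assert gateway in iface.network                 # gateway is on-link
assert gateway == next(iface.network.hosts())   # first usable host address
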
Sep 12 23:54:58.133514 ignition[1388]: INFO : Ignition 2.19.0
Sep 12 23:54:58.133514 ignition[1388]: INFO : Stage: files
Sep 12 23:54:58.139269 ignition[1388]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:54:58.139269 ignition[1388]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:54:58.139269 ignition[1388]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:54:58.149874 ignition[1388]: INFO : PUT result: OK
Sep 12 23:54:58.155483 ignition[1388]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 23:54:58.169755 ignition[1388]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 23:54:58.169755 ignition[1388]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 23:54:58.199736 ignition[1388]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 23:54:58.204709 ignition[1388]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 23:54:58.209114 unknown[1388]: wrote ssh authorized keys file for user: core
Sep 12 23:54:58.212061 ignition[1388]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 23:54:58.222398 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 12 23:54:58.222398 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 12 23:54:58.348601 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 23:54:58.664660 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 23:54:58.673215 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 12 23:54:59.024714 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 23:54:59.406266 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 23:54:59.406266 ignition[1388]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 23:54:59.415962 ignition[1388]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:54:59.415962 ignition[1388]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:54:59.415962 ignition[1388]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 23:54:59.415962 ignition[1388]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 23:54:59.415962 ignition[1388]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 23:54:59.415962 ignition[1388]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:54:59.415962 ignition[1388]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:54:59.415962 ignition[1388]: INFO : files: files passed
Sep 12 23:54:59.415962 ignition[1388]: INFO : Ignition finished successfully
Sep 12 23:54:59.432967 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 23:54:59.451892 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 23:54:59.457071 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 23:54:59.476179 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 23:54:59.478492 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 23:54:59.503530 initrd-setup-root-after-ignition[1416]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:54:59.507913 initrd-setup-root-after-ignition[1416]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:54:59.512112 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:54:59.519937 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:54:59.523944 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 23:54:59.537868 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 23:54:59.606081 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 23:54:59.606310 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 23:54:59.615981 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 23:54:59.618962 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 23:54:59.621909 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 23:54:59.636928 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 23:54:59.669834 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:54:59.682071 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 23:54:59.717877 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:54:59.722591 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:54:59.732254 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 23:54:59.735332 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 23:54:59.736037 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:54:59.744060 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 23:54:59.751074 systemd[1]: Stopped target basic.target - Basic System. Sep 12 23:54:59.758376 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 23:54:59.765582 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:54:59.772249 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 23:54:59.781574 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 23:54:59.786942 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:54:59.795898 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 23:54:59.803471 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 23:54:59.807158 systemd[1]: Stopped target swap.target - Swaps. Sep 12 23:54:59.809823 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 23:54:59.810063 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:54:59.820411 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:54:59.823720 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:54:59.829734 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 23:54:59.833766 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:54:59.839761 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 23:54:59.840793 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 23:54:59.860228 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 23:54:59.860578 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:54:59.866833 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 23:54:59.867074 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 23:54:59.901858 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 23:54:59.910992 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
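The long run of Stopped target / Stopped service lines here is initrd-cleanup.service dismantling the initramfs environment ahead of the pivot to the real root. Because the persistent journal survives the pivot (the flush later in this boot shows it is enabled), the same sequence can be replayed afterwards, for example:

    journalctl -b -o short-monotonic -u initrd-cleanup.service -u dracut-pre-pivot.service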
Sep 12 23:54:59.918824 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 23:54:59.919266 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:54:59.933883 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 23:54:59.934231 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:54:59.958286 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 23:54:59.960982 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 23:54:59.985520 ignition[1440]: INFO : Ignition 2.19.0 Sep 12 23:54:59.985520 ignition[1440]: INFO : Stage: umount Sep 12 23:54:59.985520 ignition[1440]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:54:59.985520 ignition[1440]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 23:55:00.004231 ignition[1440]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 23:55:00.004231 ignition[1440]: INFO : PUT result: OK Sep 12 23:54:59.989413 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 23:55:00.021821 ignition[1440]: INFO : umount: umount passed Sep 12 23:55:00.021821 ignition[1440]: INFO : Ignition finished successfully Sep 12 23:55:00.016422 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 23:55:00.016761 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 23:55:00.026254 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 23:55:00.026536 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 23:55:00.028874 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 23:55:00.028972 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 23:55:00.029489 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 23:55:00.029570 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 23:55:00.029904 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 23:55:00.029981 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 23:55:00.030319 systemd[1]: Stopped target network.target - Network. Sep 12 23:55:00.031164 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 23:55:00.031249 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:55:00.031708 systemd[1]: Stopped target paths.target - Path Units. Sep 12 23:55:00.032098 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 23:55:00.046001 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:55:00.046135 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 23:55:00.060137 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 23:55:00.062543 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 23:55:00.062629 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:55:00.065669 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 23:55:00.065755 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:55:00.068593 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 23:55:00.068695 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 23:55:00.071593 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 23:55:00.071701 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
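Each Ignition stage (files above, umount here) re-authenticates to the EC2 instance metadata service before doing anything else; the "PUT http://169.254.169.254/latest/api/token" lines are the IMDSv2 session-token handshake, which looks like this when done by hand:

    TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
      -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600')

All subsequent metadata reads then carry the token in an X-aws-ec2-metadata-token header; the coreos-metadata fetches later in this boot follow the same pattern.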
Sep 12 23:55:00.074697 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 23:55:00.074806 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 23:55:00.077742 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 23:55:00.080883 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 23:55:00.121919 systemd-networkd[1196]: eth0: DHCPv6 lease lost Sep 12 23:55:00.138221 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 23:55:00.138487 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 23:55:00.147070 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 23:55:00.147371 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 23:55:00.154510 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 23:55:00.154622 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:55:00.172756 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 23:55:00.196669 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 23:55:00.196820 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:55:00.207323 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:55:00.207540 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:55:00.216059 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 23:55:00.216182 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 23:55:00.219002 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 23:55:00.219103 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:55:00.226431 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:55:00.262847 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 23:55:00.263386 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 23:55:00.273303 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 23:55:00.273793 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:55:00.280710 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 23:55:00.280813 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 23:55:00.283941 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 23:55:00.284025 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:55:00.290487 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 23:55:00.290602 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:55:00.293952 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 23:55:00.294064 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 23:55:00.296919 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:55:00.297020 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:55:00.336304 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 23:55:00.338940 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 12 23:55:00.339066 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:55:00.342552 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 23:55:00.342656 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:55:00.345879 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 23:55:00.345976 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:55:00.349346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:55:00.349467 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:55:00.393195 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 23:55:00.393731 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 23:55:00.405993 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 23:55:00.426658 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 23:55:00.444348 systemd[1]: Switching root. Sep 12 23:55:00.509296 systemd-journald[249]: Journal stopped Sep 12 23:55:03.527498 systemd-journald[249]: Received SIGTERM from PID 1 (systemd). Sep 12 23:55:03.527660 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 23:55:03.527709 kernel: SELinux: policy capability open_perms=1 Sep 12 23:55:03.527742 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 23:55:03.527779 kernel: SELinux: policy capability always_check_network=0 Sep 12 23:55:03.527811 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 23:55:03.527841 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 23:55:03.527872 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 23:55:03.527904 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 23:55:03.527934 kernel: audit: type=1403 audit(1757721301.047:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 23:55:03.527982 systemd[1]: Successfully loaded SELinux policy in 108.612ms. Sep 12 23:55:03.528031 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.164ms. Sep 12 23:55:03.528068 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 23:55:03.528103 systemd[1]: Detected virtualization amazon. Sep 12 23:55:03.528137 systemd[1]: Detected architecture arm64. Sep 12 23:55:03.528169 systemd[1]: Detected first boot. Sep 12 23:55:03.528240 systemd[1]: Initializing machine ID from VM UUID. Sep 12 23:55:03.528277 zram_generator::config[1483]: No configuration found. Sep 12 23:55:03.528327 systemd[1]: Populated /etc with preset unit settings. Sep 12 23:55:03.528362 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 23:55:03.528404 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 23:55:03.528480 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 23:55:03.528529 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 23:55:03.528565 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Sep 12 23:55:03.528599 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 23:55:03.528634 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 23:55:03.528666 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 23:55:03.528706 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 23:55:03.528742 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 23:55:03.528778 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 23:55:03.528809 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:55:03.528842 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:55:03.528876 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 23:55:03.528906 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 23:55:03.528941 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 23:55:03.528979 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:55:03.529010 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 23:55:03.529044 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:55:03.529074 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 23:55:03.529107 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 23:55:03.529140 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 23:55:03.529179 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 23:55:03.529213 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:55:03.529251 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:55:03.529283 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:55:03.529316 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:55:03.529346 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 23:55:03.529379 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 23:55:03.529411 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:55:03.531534 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:55:03.531602 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:55:03.531634 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 23:55:03.531679 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 23:55:03.531713 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 23:55:03.531744 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 23:55:03.531774 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 23:55:03.531808 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 23:55:03.531840 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
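Two details worth decoding in this stretch: the "Journal stopped" / "Received SIGTERM from PID 1" pair brackets the handoff from the initramfs journald to the real root's instance (both halves land under the same boot ID in the persistent journal), and the \x2d sequences in slice names are systemd's unit-name escaping of "-". Both can be poked at directly (--grep relies on the +PCRE2 build flag visible in the feature string above):

    journalctl -b -k --grep 'SELinux: policy capability'   # kernel lines from this boot
    systemctl list-units --type=slice --all                # shows system-serial\x2dgetty.slice etc.
    systemd-escape 'serial-getty'                          # prints serial\x2dgetty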
Sep 12 23:55:03.531871 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 23:55:03.531903 systemd[1]: Reached target machines.target - Containers. Sep 12 23:55:03.531933 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 23:55:03.531967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:55:03.532002 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:55:03.532032 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 23:55:03.532062 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:55:03.532093 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:55:03.532124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:55:03.532156 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 23:55:03.533667 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:55:03.533746 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 23:55:03.533779 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 23:55:03.533813 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 23:55:03.533843 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 23:55:03.533878 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 23:55:03.533908 kernel: fuse: init (API version 7.39) Sep 12 23:55:03.533940 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:55:03.533970 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:55:03.534000 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 23:55:03.534035 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 23:55:03.534066 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:55:03.534098 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 23:55:03.534128 systemd[1]: Stopped verity-setup.service. Sep 12 23:55:03.534158 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 23:55:03.534189 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 23:55:03.534221 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 23:55:03.534253 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 23:55:03.534285 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 23:55:03.534322 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 23:55:03.534352 kernel: loop: module loaded Sep 12 23:55:03.534381 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:55:03.534411 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 23:55:03.534474 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 23:55:03.534515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 12 23:55:03.534547 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:55:03.534577 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:55:03.534613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:55:03.534644 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 23:55:03.534674 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 23:55:03.534705 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:55:03.534735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:55:03.534769 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 23:55:03.534800 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 23:55:03.534830 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 23:55:03.534860 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 23:55:03.534891 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:55:03.534925 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 23:55:03.534961 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 23:55:03.534990 kernel: ACPI: bus type drm_connector registered Sep 12 23:55:03.535019 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 23:55:03.535049 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:55:03.535080 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 23:55:03.535110 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:55:03.535193 systemd-journald[1565]: Collecting audit messages is disabled. Sep 12 23:55:03.535259 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 23:55:03.535293 systemd-journald[1565]: Journal started Sep 12 23:55:03.535340 systemd-journald[1565]: Runtime Journal (/run/log/journal/ec2861741484674a260305319b488dea) is 8.0M, max 75.3M, 67.3M free. Sep 12 23:55:02.737935 systemd[1]: Queued start job for default target multi-user.target. Sep 12 23:55:02.767608 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 23:55:02.768399 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 23:55:03.548493 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:55:03.562211 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 23:55:03.573096 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:55:03.584568 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:55:03.588852 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:55:03.589199 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:55:03.593550 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
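The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units in this stretch are all instances of one template, modprobe@.service, which substitutes the instance name into a modprobe call. Abridged from the upstream systemd template (the exact flags may differ by version):

    # modprobe@.service (abridged sketch of the upstream template)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i

The "-" prefix on ExecStart means a failed modprobe does not mark the unit failed, and any module can be loaded on demand with, e.g., systemctl start modprobe@fuse.service — hence the interleaved "fuse: init", "loop: module loaded", and drm kernel lines as each instance finishes.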
Sep 12 23:55:03.597493 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:55:03.601107 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 23:55:03.605472 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 23:55:03.613484 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:55:03.697202 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 23:55:03.713362 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 23:55:03.717023 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 23:55:03.734807 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 23:55:03.747885 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 23:55:03.763249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:55:03.777580 kernel: loop0: detected capacity change from 0 to 114432 Sep 12 23:55:03.769201 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 23:55:03.794915 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Sep 12 23:55:03.794961 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Sep 12 23:55:03.805734 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 23:55:03.807595 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 23:55:03.842071 systemd-journald[1565]: Time spent on flushing to /var/log/journal/ec2861741484674a260305319b488dea is 100.136ms for 916 entries. Sep 12 23:55:03.842071 systemd-journald[1565]: System Journal (/var/log/journal/ec2861741484674a260305319b488dea) is 8.0M, max 195.6M, 187.6M free. Sep 12 23:55:03.962015 systemd-journald[1565]: Received client request to flush runtime journal. Sep 12 23:55:03.962119 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 23:55:03.962171 kernel: loop1: detected capacity change from 0 to 52536 Sep 12 23:55:03.846228 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:55:03.859422 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 23:55:03.904574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:55:03.970393 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 23:55:04.003012 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 23:55:04.014728 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:55:04.040554 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:55:04.056982 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 23:55:04.082496 kernel: loop2: detected capacity change from 0 to 207008 Sep 12 23:55:04.123192 udevadm[1634]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 23:55:04.134335 systemd-tmpfiles[1632]: ACLs are not supported, ignoring. Sep 12 23:55:04.134398 systemd-tmpfiles[1632]: ACLs are not supported, ignoring. 
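The journald lines here show the boot-time handoff from the volatile runtime journal (/run/log/journal, capped at 75.3M above) to the persistent one (/var/log/journal, capped at 195.6M): systemd-journal-flush.service is what issues the "client request to flush runtime journal". The same operations are available manually:

    journalctl --disk-usage   # current persistent + runtime journal usage
    journalctl --flush        # what systemd-journal-flush.service triggers at boot

The caps come from RuntimeMaxUse=/SystemMaxUse= in journald.conf; the defaults seen here are derived from the size of the backing filesystem.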
Sep 12 23:55:04.162196 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:55:04.166759 kernel: loop3: detected capacity change from 0 to 114328 Sep 12 23:55:04.280521 kernel: loop4: detected capacity change from 0 to 114432 Sep 12 23:55:04.305589 kernel: loop5: detected capacity change from 0 to 52536 Sep 12 23:55:04.329704 kernel: loop6: detected capacity change from 0 to 207008 Sep 12 23:55:04.356526 kernel: loop7: detected capacity change from 0 to 114328 Sep 12 23:55:04.367337 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 12 23:55:04.368653 (sd-merge)[1641]: Merged extensions into '/usr'. Sep 12 23:55:04.380107 systemd[1]: Reloading requested from client PID 1590 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 23:55:04.380142 systemd[1]: Reloading... Sep 12 23:55:04.607492 zram_generator::config[1663]: No configuration found. Sep 12 23:55:04.889556 ldconfig[1583]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 23:55:04.983964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:55:05.114840 systemd[1]: Reloading finished in 733 ms. Sep 12 23:55:05.154590 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 23:55:05.167613 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 23:55:05.185915 systemd[1]: Starting ensure-sysext.service... Sep 12 23:55:05.201758 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:55:05.246788 systemd[1]: Reloading requested from client PID 1719 ('systemctl') (unit ensure-sysext.service)... Sep 12 23:55:05.246835 systemd[1]: Reloading... Sep 12 23:55:05.263389 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 23:55:05.264216 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 23:55:05.277774 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 23:55:05.282610 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Sep 12 23:55:05.282835 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Sep 12 23:55:05.304119 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:55:05.304158 systemd-tmpfiles[1721]: Skipping /boot Sep 12 23:55:05.366187 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:55:05.368697 systemd-tmpfiles[1721]: Skipping /boot Sep 12 23:55:05.466499 zram_generator::config[1747]: No configuration found. Sep 12 23:55:05.729101 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:55:05.854088 systemd[1]: Reloading finished in 606 ms. Sep 12 23:55:05.883614 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 23:55:05.895648 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
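The (sd-merge) lines are systemd-sysext at work: each loop-device "capacity change" above corresponds to one of the .raw extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') being attached, after which their /usr trees are overlaid onto the root's /usr and systemd reloads. The kubernetes.raw entry under /etc/extensions is exactly the symlink Ignition created earlier. Merge state can be inspected and redone at runtime:

    systemd-sysext status    # which extensions are merged, and from where
    systemd-sysext refresh   # re-merge after adding/removing images under /etc/extensions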
Sep 12 23:55:05.918027 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:55:05.933760 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 23:55:05.942563 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 23:55:05.964006 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:55:05.974851 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:55:05.983153 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 23:55:05.998370 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:55:06.014166 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:55:06.020199 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:55:06.031042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:55:06.034289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:55:06.042679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:55:06.043137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:55:06.044405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:55:06.047579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:55:06.066266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:55:06.082082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:55:06.088310 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:55:06.091163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:55:06.091643 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 23:55:06.113119 systemd[1]: Finished ensure-sysext.service. Sep 12 23:55:06.134290 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 23:55:06.137858 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 23:55:06.142010 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:55:06.142980 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:55:06.186991 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 23:55:06.198535 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 23:55:06.206973 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:55:06.209568 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:55:06.213813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:55:06.225199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:55:06.226245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 12 23:55:06.230768 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:55:06.250328 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:55:06.252594 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:55:06.291325 systemd-udevd[1807]: Using default interface naming scheme 'v255'. Sep 12 23:55:06.303476 augenrules[1838]: No rules Sep 12 23:55:06.306853 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:55:06.310676 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 23:55:06.321219 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 23:55:06.328142 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:55:06.359919 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 23:55:06.371157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:55:06.388902 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:55:06.584430 systemd-networkd[1851]: lo: Link UP Sep 12 23:55:06.588537 systemd-networkd[1851]: lo: Gained carrier Sep 12 23:55:06.590005 systemd-networkd[1851]: Enumeration completed Sep 12 23:55:06.590430 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:55:06.609316 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 23:55:06.626078 systemd-resolved[1806]: Positive Trust Anchors: Sep 12 23:55:06.626122 systemd-resolved[1806]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:55:06.626189 systemd-resolved[1806]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:55:06.652334 systemd-resolved[1806]: Defaulting to hostname 'linux'. Sep 12 23:55:06.656065 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:55:06.659540 systemd[1]: Reached target network.target - Network. Sep 12 23:55:06.662000 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:55:06.753907 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 23:55:06.784188 (udev-worker)[1859]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:55:06.877268 systemd-networkd[1851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:55:06.877301 systemd-networkd[1851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 23:55:06.882140 systemd-networkd[1851]: eth0: Link UP Sep 12 23:55:06.882644 systemd-networkd[1851]: eth0: Gained carrier Sep 12 23:55:06.882700 systemd-networkd[1851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:55:06.910486 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1861) Sep 12 23:55:06.911650 systemd-networkd[1851]: eth0: DHCPv4 address 172.31.20.162/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 23:55:07.209110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:55:07.226775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 23:55:07.233544 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 23:55:07.251076 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 23:55:07.260055 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 23:55:07.287791 lvm[1970]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:55:07.316840 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 23:55:07.326683 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 23:55:07.328059 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:55:07.340738 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 23:55:07.361780 lvm[1976]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:55:07.391202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:55:07.395743 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:55:07.399965 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 23:55:07.403571 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:55:07.407526 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 23:55:07.410530 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 23:55:07.414381 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 23:55:07.418139 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:55:07.418229 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:55:07.420621 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:55:07.424923 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:55:07.431419 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:55:07.441157 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:55:07.445961 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 23:55:07.449822 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:55:07.454328 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:55:07.457707 systemd[1]: Reached target basic.target - Basic System. 
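"found matching network '/usr/lib/systemd/network/zz-default.network'" is Flatcar's catch-all networkd fragment, and it is what produced the DHCPv4 lease for 172.31.20.162/20 above. Its effect is roughly the following (a sketch; the shipped file carries more settings):

    # sketch of a catch-all networkd fragment like zz-default.network
    [Match]
    Name=*

    [Network]
    DHCP=yes

Runtime state for both halves of the stack is visible with networkctl status eth0 and resolvectl status; the "Positive Trust Anchors" dump above is systemd-resolved announcing its built-in DNSSEC root trust anchor (the root-zone DS 20326) plus the private zones it will never forward upstream.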
Sep 12 23:55:07.460726 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:55:07.460797 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:55:07.469854 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:55:07.481899 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 23:55:07.490931 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:55:07.504741 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:55:07.525304 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:55:07.533613 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:55:07.548933 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:55:07.556862 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 23:55:07.564722 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:55:07.569278 jq[1986]: false Sep 12 23:55:07.584617 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 23:55:07.588778 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:55:07.596373 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:55:07.619764 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 23:55:07.625721 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:55:07.628890 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:55:07.639797 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:55:07.648767 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:55:07.659059 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:55:07.659540 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 23:55:07.708410 dbus-daemon[1985]: [system] SELinux support is enabled Sep 12 23:55:07.709644 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
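"Starting prepare-helm.service - Unpack helm to /opt/bin..." is the unit Ignition wrote during the files stage. Its contents never appear in the log; a plausible shape, given only the unit's description and the tarball written to /opt earlier, would be:

    # hypothetical prepare-helm.service body; only the unit name and
    # description are confirmed by the log
    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=!/opt/bin/helm

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/usr/bin/mkdir -p /opt/bin
    ExecStart=/usr/bin/tar --strip-components=1 -xzf /opt/helm-v3.17.0-linux-arm64.tar.gz -C /opt/bin linux-arm64/helm

    [Install]
    WantedBy=multi-user.target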
Sep 12 23:55:07.722936 extend-filesystems[1987]: Found loop4 Sep 12 23:55:07.726494 extend-filesystems[1987]: Found loop5 Sep 12 23:55:07.726494 extend-filesystems[1987]: Found loop6 Sep 12 23:55:07.726494 extend-filesystems[1987]: Found loop7 Sep 12 23:55:07.726494 extend-filesystems[1987]: Found nvme0n1 Sep 12 23:55:07.726494 extend-filesystems[1987]: Found nvme0n1p1 Sep 12 23:55:07.726494 extend-filesystems[1987]: Found nvme0n1p2 Sep 12 23:55:07.726494 extend-filesystems[1987]: Found nvme0n1p3 Sep 12 23:55:07.726494 extend-filesystems[1987]: Found usr Sep 12 23:55:07.726494 extend-filesystems[1987]: Found nvme0n1p4 Sep 12 23:55:07.726494 extend-filesystems[1987]: Found nvme0n1p6 Sep 12 23:55:07.726494 extend-filesystems[1987]: Found nvme0n1p7 Sep 12 23:55:07.789603 extend-filesystems[1987]: Found nvme0n1p9 Sep 12 23:55:07.789603 extend-filesystems[1987]: Checking size of /dev/nvme0n1p9 Sep 12 23:55:07.769230 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:55:07.748023 dbus-daemon[1985]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1851 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 23:55:07.769299 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 23:55:07.782730 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 23:55:07.782775 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:55:07.799693 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 23:55:07.811026 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:55:07.812591 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:55:07.835333 jq[2000]: true Sep 12 23:55:07.850081 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 22:00:00 UTC 2025 (1): Starting Sep 12 23:55:07.856515 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 22:00:00 UTC 2025 (1): Starting Sep 12 23:55:07.856515 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 23:55:07.856515 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: ---------------------------------------------------- Sep 12 23:55:07.856515 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: ntp-4 is maintained by Network Time Foundation, Sep 12 23:55:07.856515 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 23:55:07.856515 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: corporation. 
Support and training for ntp-4 are Sep 12 23:55:07.856515 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: available at https://www.nwtime.org/support Sep 12 23:55:07.856515 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: ---------------------------------------------------- Sep 12 23:55:07.850133 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 23:55:07.850154 ntpd[1990]: ---------------------------------------------------- Sep 12 23:55:07.861264 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: proto: precision = 0.096 usec (-23) Sep 12 23:55:07.850174 ntpd[1990]: ntp-4 is maintained by Network Time Foundation, Sep 12 23:55:07.850193 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 23:55:07.850213 ntpd[1990]: corporation. Support and training for ntp-4 are Sep 12 23:55:07.850233 ntpd[1990]: available at https://www.nwtime.org/support Sep 12 23:55:07.850253 ntpd[1990]: ---------------------------------------------------- Sep 12 23:55:07.859750 ntpd[1990]: proto: precision = 0.096 usec (-23) Sep 12 23:55:07.862118 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 23:55:07.869762 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: basedate set to 2025-08-31 Sep 12 23:55:07.869762 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: gps base set to 2025-08-31 (week 2382) Sep 12 23:55:07.866891 ntpd[1990]: basedate set to 2025-08-31 Sep 12 23:55:07.867788 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:55:07.866954 ntpd[1990]: gps base set to 2025-08-31 (week 2382) Sep 12 23:55:07.871516 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 23:55:07.876272 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 23:55:07.880605 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 23:55:07.880605 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 23:55:07.880605 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 23:55:07.880605 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: Listen normally on 3 eth0 172.31.20.162:123 Sep 12 23:55:07.880605 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: Listen normally on 4 lo [::1]:123 Sep 12 23:55:07.880605 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: bind(21) AF_INET6 fe80::494:acff:fe7f:663d%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 23:55:07.880605 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: unable to create socket on eth0 (5) for fe80::494:acff:fe7f:663d%2#123 Sep 12 23:55:07.880605 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: failed to init interface for address fe80::494:acff:fe7f:663d%2 Sep 12 23:55:07.880605 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: Listening on routing socket on fd #21 for interface updates Sep 12 23:55:07.878663 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 23:55:07.879132 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 23:55:07.879244 ntpd[1990]: Listen normally on 3 eth0 172.31.20.162:123 Sep 12 23:55:07.879347 ntpd[1990]: Listen normally on 4 lo [::1]:123 Sep 12 23:55:07.879523 ntpd[1990]: bind(21) AF_INET6 fe80::494:acff:fe7f:663d%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 23:55:07.879595 ntpd[1990]: unable to create socket on eth0 (5) for fe80::494:acff:fe7f:663d%2#123 Sep 12 23:55:07.879630 ntpd[1990]: failed to init interface for address fe80::494:acff:fe7f:663d%2 Sep 12 23:55:07.879707 ntpd[1990]: Listening on routing socket on fd #21 for interface updates Sep 12 23:55:07.897195 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock 
Unsynchronized Sep 12 23:55:07.898570 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 23:55:07.898817 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 23:55:07.901280 ntpd[1990]: 12 Sep 23:55:07 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 23:55:07.909356 jq[2022]: true Sep 12 23:55:07.914973 tar[2002]: linux-arm64/LICENSE Sep 12 23:55:07.915674 tar[2002]: linux-arm64/helm Sep 12 23:55:07.928865 extend-filesystems[1987]: Resized partition /dev/nvme0n1p9 Sep 12 23:55:07.950431 extend-filesystems[2032]: resize2fs 1.47.1 (20-May-2024) Sep 12 23:55:07.971604 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 23:55:08.037168 (ntainerd)[2027]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:55:08.064484 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 23:55:08.058578 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 23:55:08.091624 extend-filesystems[2032]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 23:55:08.091624 extend-filesystems[2032]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 23:55:08.091624 extend-filesystems[2032]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 23:55:08.112527 extend-filesystems[1987]: Resized filesystem in /dev/nvme0n1p9 Sep 12 23:55:08.095765 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:55:08.097982 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:55:08.129227 update_engine[1998]: I20250912 23:55:08.118076 1998 main.cc:92] Flatcar Update Engine starting Sep 12 23:55:08.129227 update_engine[1998]: I20250912 23:55:08.127615 1998 update_check_scheduler.cc:74] Next update check in 9m13s Sep 12 23:55:08.123021 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:55:08.135061 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 23:55:08.277028 bash[2062]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:55:08.285690 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1860) Sep 12 23:55:08.281681 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:55:08.301724 systemd[1]: Starting sshkeys.service... 
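The extend-filesystems exchange shows the standard first-boot root grow on EC2: the ROOT partition (nvme0n1p9) already spans the disk, so resize2fs only has to perform an online grow of the ext4 filesystem from 553472 to 1489915 4k blocks, i.e. from roughly 2.1G to 5.7G. Done by hand it is a single command against the mounted filesystem:

    resize2fs /dev/nvme0n1p9   # online ext4 grow; safe while mounted, as the
                               # "on-line resizing required" message indicates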
Sep 12 23:55:08.362639 coreos-metadata[1984]: Sep 12 23:55:08.362 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 23:55:08.371252 coreos-metadata[1984]: Sep 12 23:55:08.371 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 23:55:08.374318 coreos-metadata[1984]: Sep 12 23:55:08.374 INFO Fetch successful Sep 12 23:55:08.374318 coreos-metadata[1984]: Sep 12 23:55:08.374 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 23:55:08.379015 coreos-metadata[1984]: Sep 12 23:55:08.378 INFO Fetch successful Sep 12 23:55:08.379015 coreos-metadata[1984]: Sep 12 23:55:08.378 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 23:55:08.379015 coreos-metadata[1984]: Sep 12 23:55:08.378 INFO Fetch successful Sep 12 23:55:08.381194 coreos-metadata[1984]: Sep 12 23:55:08.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 23:55:08.381194 coreos-metadata[1984]: Sep 12 23:55:08.380 INFO Fetch successful Sep 12 23:55:08.381194 coreos-metadata[1984]: Sep 12 23:55:08.380 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 23:55:08.381194 coreos-metadata[1984]: Sep 12 23:55:08.380 INFO Fetch failed with 404: resource not found Sep 12 23:55:08.381194 coreos-metadata[1984]: Sep 12 23:55:08.381 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 23:55:08.381194 coreos-metadata[1984]: Sep 12 23:55:08.382 INFO Fetch successful Sep 12 23:55:08.381194 coreos-metadata[1984]: Sep 12 23:55:08.382 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 23:55:08.395514 coreos-metadata[1984]: Sep 12 23:55:08.393 INFO Fetch successful Sep 12 23:55:08.395514 coreos-metadata[1984]: Sep 12 23:55:08.394 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 23:55:08.398499 coreos-metadata[1984]: Sep 12 23:55:08.396 INFO Fetch successful Sep 12 23:55:08.398499 coreos-metadata[1984]: Sep 12 23:55:08.396 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 23:55:08.401754 coreos-metadata[1984]: Sep 12 23:55:08.401 INFO Fetch successful Sep 12 23:55:08.401754 coreos-metadata[1984]: Sep 12 23:55:08.401 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 23:55:08.408929 coreos-metadata[1984]: Sep 12 23:55:08.408 INFO Fetch successful Sep 12 23:55:08.474830 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 23:55:08.483051 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 23:55:08.542616 systemd-logind[1997]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 23:55:08.542691 systemd-logind[1997]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 12 23:55:08.549751 systemd-logind[1997]: New seat seat0. Sep 12 23:55:08.559899 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 23:55:08.572931 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 23:55:08.577707 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
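coreos-metadata walks the 2021-01-03 metadata tree using the same IMDSv2 token dance shown earlier; the single 404 (for meta-data/ipv6) is expected on an instance with no IPv6 address assigned, and the agent simply moves on. Any of these reads can be reproduced manually, with $TOKEN obtained from the token PUT shown earlier:

    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2021-01-03/meta-data/instance-id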
Sep 12 23:55:08.587567 systemd-networkd[1851]: eth0: Gained IPv6LL Sep 12 23:55:08.598526 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:55:08.604028 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:55:08.635704 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 23:55:08.645956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:08.655165 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:55:08.867314 locksmithd[2047]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:55:08.979072 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 23:55:08.979364 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 23:55:08.986718 dbus-daemon[1985]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2021 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 23:55:09.034837 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 23:55:09.041580 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 23:55:09.055426 containerd[2027]: time="2025-09-12T23:55:09.053089235Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 23:55:09.065364 coreos-metadata[2078]: Sep 12 23:55:09.063 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 23:55:09.074784 coreos-metadata[2078]: Sep 12 23:55:09.068 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 23:55:09.074784 coreos-metadata[2078]: Sep 12 23:55:09.069 INFO Fetch successful Sep 12 23:55:09.074784 coreos-metadata[2078]: Sep 12 23:55:09.069 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 23:55:09.074784 coreos-metadata[2078]: Sep 12 23:55:09.070 INFO Fetch successful Sep 12 23:55:09.083365 unknown[2078]: wrote ssh authorized keys file for user: core Sep 12 23:55:09.127802 amazon-ssm-agent[2099]: Initializing new seelog logger Sep 12 23:55:09.135330 amazon-ssm-agent[2099]: New Seelog Logger Creation Complete Sep 12 23:55:09.135330 amazon-ssm-agent[2099]: 2025/09/12 23:55:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:55:09.135330 amazon-ssm-agent[2099]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:55:09.146680 amazon-ssm-agent[2099]: 2025/09/12 23:55:09 processing appconfig overrides Sep 12 23:55:09.153784 amazon-ssm-agent[2099]: 2025/09/12 23:55:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:55:09.153784 amazon-ssm-agent[2099]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:55:09.154312 polkitd[2153]: Started polkitd version 121 Sep 12 23:55:09.157327 amazon-ssm-agent[2099]: 2025/09/12 23:55:09 processing appconfig overrides Sep 12 23:55:09.157327 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO Proxy environment variables: Sep 12 23:55:09.164267 amazon-ssm-agent[2099]: 2025/09/12 23:55:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:55:09.164267 amazon-ssm-agent[2099]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 23:55:09.164267 amazon-ssm-agent[2099]: 2025/09/12 23:55:09 processing appconfig overrides Sep 12 23:55:09.183762 update-ssh-keys[2163]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:55:09.178622 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 23:55:09.191349 systemd[1]: Finished sshkeys.service. Sep 12 23:55:09.198276 amazon-ssm-agent[2099]: 2025/09/12 23:55:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:55:09.198276 amazon-ssm-agent[2099]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:55:09.198276 amazon-ssm-agent[2099]: 2025/09/12 23:55:09 processing appconfig overrides Sep 12 23:55:09.230879 polkitd[2153]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 23:55:09.231042 polkitd[2153]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 23:55:09.237275 polkitd[2153]: Finished loading, compiling and executing 2 rules Sep 12 23:55:09.251234 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 23:55:09.252295 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 23:55:09.256203 polkitd[2153]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 23:55:09.268606 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO https_proxy: Sep 12 23:55:09.340276 systemd-hostnamed[2021]: Hostname set to (transient) Sep 12 23:55:09.348544 systemd-resolved[1806]: System hostname changed to 'ip-172-31-20-162'. Sep 12 23:55:09.369236 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO http_proxy: Sep 12 23:55:09.398974 containerd[2027]: time="2025-09-12T23:55:09.398761620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:09.415146 containerd[2027]: time="2025-09-12T23:55:09.415046736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:55:09.415146 containerd[2027]: time="2025-09-12T23:55:09.415131564Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 23:55:09.415352 containerd[2027]: time="2025-09-12T23:55:09.415172664Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 23:55:09.416943 containerd[2027]: time="2025-09-12T23:55:09.416849388Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 23:55:09.416943 containerd[2027]: time="2025-09-12T23:55:09.416937444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:09.417196 containerd[2027]: time="2025-09-12T23:55:09.417132852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:55:09.417196 containerd[2027]: time="2025-09-12T23:55:09.417187380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:09.421719 containerd[2027]: time="2025-09-12T23:55:09.421603549Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:55:09.421719 containerd[2027]: time="2025-09-12T23:55:09.421701133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:09.421920 containerd[2027]: time="2025-09-12T23:55:09.421742941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:55:09.421920 containerd[2027]: time="2025-09-12T23:55:09.421775221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:09.424474 containerd[2027]: time="2025-09-12T23:55:09.422037397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:09.427818 containerd[2027]: time="2025-09-12T23:55:09.426386473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:55:09.432492 containerd[2027]: time="2025-09-12T23:55:09.430041949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:55:09.432492 containerd[2027]: time="2025-09-12T23:55:09.430170013Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 23:55:09.432921 containerd[2027]: time="2025-09-12T23:55:09.432814765Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 23:55:09.434020 containerd[2027]: time="2025-09-12T23:55:09.433949449Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:55:09.454638 containerd[2027]: time="2025-09-12T23:55:09.454559857Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 23:55:09.454934 containerd[2027]: time="2025-09-12T23:55:09.454686661Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 23:55:09.454934 containerd[2027]: time="2025-09-12T23:55:09.454727449Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 23:55:09.454934 containerd[2027]: time="2025-09-12T23:55:09.454774057Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 23:55:09.454934 containerd[2027]: time="2025-09-12T23:55:09.454808893Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 23:55:09.455113 containerd[2027]: time="2025-09-12T23:55:09.455085541Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459067981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459433741Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459529537Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459573877Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459623161Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459658609Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459702625Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459749665Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459796849Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459839821Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459880489Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459921793Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.459977389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.462504 containerd[2027]: time="2025-09-12T23:55:09.460016269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.463248 containerd[2027]: time="2025-09-12T23:55:09.460072801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.463248 containerd[2027]: time="2025-09-12T23:55:09.460118161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.463248 containerd[2027]: time="2025-09-12T23:55:09.460186093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.463248 containerd[2027]: time="2025-09-12T23:55:09.460234345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.463248 containerd[2027]: time="2025-09-12T23:55:09.460281877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.463248 containerd[2027]: time="2025-09-12T23:55:09.460318093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.463248 containerd[2027]: time="2025-09-12T23:55:09.460366537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.460416205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.467594641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.467654245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.467699593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.467753737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.467817781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.467858845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.467889709Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.468175801Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.468234373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.468273505Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.468317869Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 23:55:09.469476 containerd[2027]: time="2025-09-12T23:55:09.468355561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 23:55:09.470106 containerd[2027]: time="2025-09-12T23:55:09.468393925Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 23:55:09.470106 containerd[2027]: time="2025-09-12T23:55:09.468434341Z" level=info msg="NRI interface is disabled by configuration." Sep 12 23:55:09.470106 containerd[2027]: time="2025-09-12T23:55:09.468503257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 23:55:09.470233 containerd[2027]: time="2025-09-12T23:55:09.469285357Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 23:55:09.470499 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO no_proxy: Sep 12 23:55:09.476513 containerd[2027]: time="2025-09-12T23:55:09.469422589Z" level=info msg="Connect containerd service" Sep 12 23:55:09.480398 containerd[2027]: time="2025-09-12T23:55:09.478951789Z" level=info msg="using legacy CRI server" Sep 12 23:55:09.480398 containerd[2027]: time="2025-09-12T23:55:09.479011453Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:55:09.480398 containerd[2027]: time="2025-09-12T23:55:09.479191237Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 23:55:09.483020 containerd[2027]: time="2025-09-12T23:55:09.482953717Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in 
/etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:55:09.485723 containerd[2027]: time="2025-09-12T23:55:09.485417305Z" level=info msg="Start subscribing containerd event" Sep 12 23:55:09.485723 containerd[2027]: time="2025-09-12T23:55:09.485579605Z" level=info msg="Start recovering state" Sep 12 23:55:09.485921 containerd[2027]: time="2025-09-12T23:55:09.485729113Z" level=info msg="Start event monitor" Sep 12 23:55:09.485921 containerd[2027]: time="2025-09-12T23:55:09.485783149Z" level=info msg="Start snapshots syncer" Sep 12 23:55:09.485921 containerd[2027]: time="2025-09-12T23:55:09.485809501Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:55:09.485921 containerd[2027]: time="2025-09-12T23:55:09.485828713Z" level=info msg="Start streaming server" Sep 12 23:55:09.494350 containerd[2027]: time="2025-09-12T23:55:09.493554421Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:55:09.494350 containerd[2027]: time="2025-09-12T23:55:09.493801021Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:55:09.494350 containerd[2027]: time="2025-09-12T23:55:09.493962445Z" level=info msg="containerd successfully booted in 0.466898s" Sep 12 23:55:09.494104 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:55:09.573302 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO Checking if agent identity type OnPrem can be assumed Sep 12 23:55:09.674595 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO Checking if agent identity type EC2 can be assumed Sep 12 23:55:09.773032 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO Agent will take identity from EC2 Sep 12 23:55:09.872656 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:55:09.974714 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:55:10.077480 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:55:10.174772 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 23:55:10.281530 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 12 23:55:10.380512 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 23:55:10.489570 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 23:55:10.514138 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO [Registrar] Starting registrar module Sep 12 23:55:10.514138 amazon-ssm-agent[2099]: 2025-09-12 23:55:09 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 23:55:10.514138 amazon-ssm-agent[2099]: 2025-09-12 23:55:10 INFO [EC2Identity] EC2 registration was successful. 
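The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is normal on a node where no CNI plugin has been installed yet; the CRI plugin keeps retrying and picks a config up as soon as a file appears, per the "Start cni network conf syncer for default" line. A hypothetical minimal bridge conflist that would satisfy it, where the network name, subnet and plugin selection are illustrative assumptions rather than anything this host actually uses:

    import json
    import pathlib

    # Write an assumed-minimal CNI conflist; with NetworkPluginMaxConfNum=1
    # (see the CRI config dump above) containerd loads the lexically first
    # config file it finds in /etc/cni/net.d.
    conf = {
        "cniVersion": "0.4.0",
        "name": "containerd-net",
        "plugins": [
            {"type": "bridge", "bridge": "cni0", "isGateway": True,
             "ipMasq": True,
             "ipam": {"type": "host-local",
                      "ranges": [[{"subnet": "10.88.0.0/16"}]],
                      "routes": [{"dst": "0.0.0.0/0"}]}},
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))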
Sep 12 23:55:10.514138 amazon-ssm-agent[2099]: 2025-09-12 23:55:10 INFO [CredentialRefresher] credentialRefresher has started Sep 12 23:55:10.514138 amazon-ssm-agent[2099]: 2025-09-12 23:55:10 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 23:55:10.514138 amazon-ssm-agent[2099]: 2025-09-12 23:55:10 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 23:55:10.591505 amazon-ssm-agent[2099]: 2025-09-12 23:55:10 INFO [CredentialRefresher] Next credential rotation will be in 32.441657853866666 minutes Sep 12 23:55:10.640770 tar[2002]: linux-arm64/README.md Sep 12 23:55:10.682631 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 23:55:10.695629 sshd_keygen[2018]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:55:10.756549 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:55:10.769051 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:55:10.795901 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:55:10.796642 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:55:10.809948 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:55:10.844350 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:55:10.854990 ntpd[1990]: Listen normally on 6 eth0 [fe80::494:acff:fe7f:663d%2]:123 Sep 12 23:55:10.858033 ntpd[1990]: 12 Sep 23:55:10 ntpd[1990]: Listen normally on 6 eth0 [fe80::494:acff:fe7f:663d%2]:123 Sep 12 23:55:10.857134 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:55:10.874240 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 23:55:10.880786 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:55:11.157798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:11.162026 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:55:11.167659 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:55:11.169695 systemd[1]: Startup finished in 1.199s (kernel) + 9.168s (initrd) + 10.229s (userspace) = 20.598s. Sep 12 23:55:11.547479 amazon-ssm-agent[2099]: 2025-09-12 23:55:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 23:55:11.649616 amazon-ssm-agent[2099]: 2025-09-12 23:55:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2243) started Sep 12 23:55:11.752003 amazon-ssm-agent[2099]: 2025-09-12 23:55:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 23:55:12.073702 kubelet[2233]: E0912 23:55:12.073564 2233 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:55:12.078277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:55:12.078679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:55:12.079270 systemd[1]: kubelet.service: Consumed 1.493s CPU time. 
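The kubelet exit above has a single root cause: /var/lib/kubelet/config.yaml does not exist yet. That file is ordinarily generated by kubeadm init or kubeadm join, which has not run at this point in the boot, so systemd simply reschedules the unit and the same failure repeats below. Purely to illustrate the file's shape, a sketch that writes a stand-in config (every value here is an assumption, not this node's eventual configuration):

    import pathlib
    import textwrap

    # Assumed-minimal KubeletConfiguration; kubeadm normally writes the
    # real one. cgroupDriver=systemd matches the SystemdCgroup:true runc
    # option visible in the containerd CRI config dump above.
    config = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
        """)
    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(config)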
Sep 12 23:55:15.158768 systemd-resolved[1806]: Clock change detected. Flushing caches. Sep 12 23:55:17.682999 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:55:17.691077 systemd[1]: Started sshd@0-172.31.20.162:22-147.75.109.163:48814.service - OpenSSH per-connection server daemon (147.75.109.163:48814). Sep 12 23:55:17.871639 sshd[2256]: Accepted publickey for core from 147.75.109.163 port 48814 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:17.874807 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:17.894124 systemd-logind[1997]: New session 1 of user core. Sep 12 23:55:17.897449 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:55:17.905371 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:55:17.940385 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:55:17.956336 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:55:17.963079 (systemd)[2260]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:55:18.195152 systemd[2260]: Queued start job for default target default.target. Sep 12 23:55:18.202782 systemd[2260]: Created slice app.slice - User Application Slice. Sep 12 23:55:18.202851 systemd[2260]: Reached target paths.target - Paths. Sep 12 23:55:18.202885 systemd[2260]: Reached target timers.target - Timers. Sep 12 23:55:18.205401 systemd[2260]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 23:55:18.234141 systemd[2260]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:55:18.234372 systemd[2260]: Reached target sockets.target - Sockets. Sep 12 23:55:18.234406 systemd[2260]: Reached target basic.target - Basic System. Sep 12 23:55:18.234492 systemd[2260]: Reached target default.target - Main User Target. Sep 12 23:55:18.234789 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:55:18.235196 systemd[2260]: Startup finished in 259ms. Sep 12 23:55:18.246855 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 23:55:18.402048 systemd[1]: Started sshd@1-172.31.20.162:22-147.75.109.163:48826.service - OpenSSH per-connection server daemon (147.75.109.163:48826). Sep 12 23:55:18.584060 sshd[2271]: Accepted publickey for core from 147.75.109.163 port 48826 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:18.586892 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:18.595876 systemd-logind[1997]: New session 2 of user core. Sep 12 23:55:18.607871 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:55:18.735887 sshd[2271]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:18.741872 systemd[1]: sshd@1-172.31.20.162:22-147.75.109.163:48826.service: Deactivated successfully. Sep 12 23:55:18.742009 systemd-logind[1997]: Session 2 logged out. Waiting for processes to exit. Sep 12 23:55:18.746386 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 23:55:18.750643 systemd-logind[1997]: Removed session 2. Sep 12 23:55:18.772131 systemd[1]: Started sshd@2-172.31.20.162:22-147.75.109.163:48840.service - OpenSSH per-connection server daemon (147.75.109.163:48840). 
Sep 12 23:55:18.945424 sshd[2278]: Accepted publickey for core from 147.75.109.163 port 48840 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:18.948328 sshd[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:18.959671 systemd-logind[1997]: New session 3 of user core. Sep 12 23:55:18.966910 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 23:55:19.085112 sshd[2278]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:19.090175 systemd[1]: sshd@2-172.31.20.162:22-147.75.109.163:48840.service: Deactivated successfully. Sep 12 23:55:19.094975 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 23:55:19.097989 systemd-logind[1997]: Session 3 logged out. Waiting for processes to exit. Sep 12 23:55:19.100412 systemd-logind[1997]: Removed session 3. Sep 12 23:55:19.128144 systemd[1]: Started sshd@3-172.31.20.162:22-147.75.109.163:48844.service - OpenSSH per-connection server daemon (147.75.109.163:48844). Sep 12 23:55:19.301277 sshd[2285]: Accepted publickey for core from 147.75.109.163 port 48844 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:19.303408 sshd[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:19.313237 systemd-logind[1997]: New session 4 of user core. Sep 12 23:55:19.324872 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:55:19.452785 sshd[2285]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:19.458758 systemd[1]: sshd@3-172.31.20.162:22-147.75.109.163:48844.service: Deactivated successfully. Sep 12 23:55:19.462327 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:55:19.464879 systemd-logind[1997]: Session 4 logged out. Waiting for processes to exit. Sep 12 23:55:19.466949 systemd-logind[1997]: Removed session 4. Sep 12 23:55:19.498275 systemd[1]: Started sshd@4-172.31.20.162:22-147.75.109.163:48854.service - OpenSSH per-connection server daemon (147.75.109.163:48854). Sep 12 23:55:19.664893 sshd[2292]: Accepted publickey for core from 147.75.109.163 port 48854 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:19.667000 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:19.675313 systemd-logind[1997]: New session 5 of user core. Sep 12 23:55:19.684937 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:55:19.810810 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:55:19.811506 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:19.828481 sudo[2295]: pam_unix(sudo:session): session closed for user root Sep 12 23:55:19.852606 sshd[2292]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:19.860811 systemd-logind[1997]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:55:19.861356 systemd[1]: sshd@4-172.31.20.162:22-147.75.109.163:48854.service: Deactivated successfully. Sep 12 23:55:19.865366 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:55:19.867983 systemd-logind[1997]: Removed session 5. Sep 12 23:55:19.893134 systemd[1]: Started sshd@5-172.31.20.162:22-147.75.109.163:48862.service - OpenSSH per-connection server daemon (147.75.109.163:48862). 
Sep 12 23:55:20.076745 sshd[2300]: Accepted publickey for core from 147.75.109.163 port 48862 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:20.078491 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:20.087688 systemd-logind[1997]: New session 6 of user core. Sep 12 23:55:20.095895 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:55:20.203002 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:55:20.204293 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:20.211699 sudo[2304]: pam_unix(sudo:session): session closed for user root Sep 12 23:55:20.222804 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 23:55:20.223534 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:20.252886 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 23:55:20.255464 auditctl[2307]: No rules Sep 12 23:55:20.256255 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:55:20.256699 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 23:55:20.261836 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:55:20.325905 augenrules[2325]: No rules Sep 12 23:55:20.329728 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:55:20.332276 sudo[2303]: pam_unix(sudo:session): session closed for user root Sep 12 23:55:20.356135 sshd[2300]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:20.363028 systemd[1]: sshd@5-172.31.20.162:22-147.75.109.163:48862.service: Deactivated successfully. Sep 12 23:55:20.366252 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:55:20.367501 systemd-logind[1997]: Session 6 logged out. Waiting for processes to exit. Sep 12 23:55:20.369437 systemd-logind[1997]: Removed session 6. Sep 12 23:55:20.398014 systemd[1]: Started sshd@6-172.31.20.162:22-147.75.109.163:57482.service - OpenSSH per-connection server daemon (147.75.109.163:57482). Sep 12 23:55:20.563587 sshd[2333]: Accepted publickey for core from 147.75.109.163 port 57482 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:20.566836 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:20.575232 systemd-logind[1997]: New session 7 of user core. Sep 12 23:55:20.585871 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 23:55:20.689110 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:55:20.689887 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:21.193080 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 23:55:21.202188 (dockerd)[2351]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:55:21.630900 dockerd[2351]: time="2025-09-12T23:55:21.630700544Z" level=info msg="Starting up" Sep 12 23:55:21.766163 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1212368472-merged.mount: Deactivated successfully. 
Sep 12 23:55:21.805314 dockerd[2351]: time="2025-09-12T23:55:21.805185105Z" level=info msg="Loading containers: start." Sep 12 23:55:21.979800 kernel: Initializing XFRM netlink socket Sep 12 23:55:22.010932 (udev-worker)[2373]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:55:22.106397 systemd-networkd[1851]: docker0: Link UP Sep 12 23:55:22.132036 dockerd[2351]: time="2025-09-12T23:55:22.130896006Z" level=info msg="Loading containers: done." Sep 12 23:55:22.155134 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3776499125-merged.mount: Deactivated successfully. Sep 12 23:55:22.163518 dockerd[2351]: time="2025-09-12T23:55:22.163345578Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:55:22.164509 dockerd[2351]: time="2025-09-12T23:55:22.163847526Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 23:55:22.164509 dockerd[2351]: time="2025-09-12T23:55:22.164096442Z" level=info msg="Daemon has completed initialization" Sep 12 23:55:22.239701 dockerd[2351]: time="2025-09-12T23:55:22.239342155Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:55:22.240691 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 23:55:22.630116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:55:22.640970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:23.101865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:23.120184 (kubelet)[2498]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:55:23.200064 kubelet[2498]: E0912 23:55:23.199896 2498 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:55:23.206803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:55:23.207138 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:55:23.428748 containerd[2027]: time="2025-09-12T23:55:23.428579541Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 23:55:24.127288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811665327.mount: Deactivated successfully. 
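Once dockerd reports "API listen on /run/docker.sock" above, the Engine API is reachable over that unix socket. A small stdlib-only sketch that queries it (GET /version is a standard Engine API endpoint; the connection subclass is just local plumbing to dial a unix socket instead of TCP):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection variant that connects to a unix socket.
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(conn.getresponse().read().decode())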
Sep 12 23:55:25.720678 containerd[2027]: time="2025-09-12T23:55:25.720592332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:25.723411 containerd[2027]: time="2025-09-12T23:55:25.723341100Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685" Sep 12 23:55:25.726148 containerd[2027]: time="2025-09-12T23:55:25.726070668Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:25.732861 containerd[2027]: time="2025-09-12T23:55:25.732758328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:25.735291 containerd[2027]: time="2025-09-12T23:55:25.734774412Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.306120243s" Sep 12 23:55:25.735291 containerd[2027]: time="2025-09-12T23:55:25.734856516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 12 23:55:25.735991 containerd[2027]: time="2025-09-12T23:55:25.735943920Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 23:55:27.202895 containerd[2027]: time="2025-09-12T23:55:27.202815995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:27.204760 containerd[2027]: time="2025-09-12T23:55:27.204619427Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200" Sep 12 23:55:27.207595 containerd[2027]: time="2025-09-12T23:55:27.205638503Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:27.212516 containerd[2027]: time="2025-09-12T23:55:27.212431787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:27.215586 containerd[2027]: time="2025-09-12T23:55:27.215487563Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.478889067s" Sep 12 23:55:27.215786 containerd[2027]: time="2025-09-12T23:55:27.215589455Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 12 23:55:27.216526 
containerd[2027]: time="2025-09-12T23:55:27.216452783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 23:55:28.404384 containerd[2027]: time="2025-09-12T23:55:28.404288209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:28.406257 containerd[2027]: time="2025-09-12T23:55:28.405641077Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324" Sep 12 23:55:28.407343 containerd[2027]: time="2025-09-12T23:55:28.407290393Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:28.415100 containerd[2027]: time="2025-09-12T23:55:28.415030681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:28.417835 containerd[2027]: time="2025-09-12T23:55:28.417732853Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.20120735s" Sep 12 23:55:28.417835 containerd[2027]: time="2025-09-12T23:55:28.417830761Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 12 23:55:28.418815 containerd[2027]: time="2025-09-12T23:55:28.418758241Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 23:55:29.725483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2501648902.mount: Deactivated successfully. 
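The sizes and durations containerd logs for these pulls give the effective download rate directly; for the three control-plane images above it works out to roughly 11-16 MiB/s each:

    # Effective pull throughput from the sizes/durations logged above.
    pulls = {
        "kube-apiserver:v1.32.9":          (26_360_284, 2.306120243),
        "kube-controller-manager:v1.32.9": (24_099_975, 1.478889067),
        "kube-scheduler:v1.32.9":          (19_053_117, 1.20120735),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")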
Sep 12 23:55:30.358247 containerd[2027]: time="2025-09-12T23:55:30.358182099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:30.361604 containerd[2027]: time="2025-09-12T23:55:30.360394959Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:30.361604 containerd[2027]: time="2025-09-12T23:55:30.360515595Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Sep 12 23:55:30.364697 containerd[2027]: time="2025-09-12T23:55:30.364596687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:30.366620 containerd[2027]: time="2025-09-12T23:55:30.366213207Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.947390502s" Sep 12 23:55:30.366620 containerd[2027]: time="2025-09-12T23:55:30.366285471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 12 23:55:30.367058 containerd[2027]: time="2025-09-12T23:55:30.366977043Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 23:55:30.895363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3363474794.mount: Deactivated successfully. 
Sep 12 23:55:32.184616 containerd[2027]: time="2025-09-12T23:55:32.184349392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:32.187012 containerd[2027]: time="2025-09-12T23:55:32.186787240Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 12 23:55:32.189257 containerd[2027]: time="2025-09-12T23:55:32.189160408Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:32.196699 containerd[2027]: time="2025-09-12T23:55:32.195634276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:32.198590 containerd[2027]: time="2025-09-12T23:55:32.198295072Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.831255425s" Sep 12 23:55:32.198590 containerd[2027]: time="2025-09-12T23:55:32.198363208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 23:55:32.200228 containerd[2027]: time="2025-09-12T23:55:32.200176396Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 23:55:32.693729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508630152.mount: Deactivated successfully. 
Sep 12 23:55:32.706648 containerd[2027]: time="2025-09-12T23:55:32.706537363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:32.709080 containerd[2027]: time="2025-09-12T23:55:32.708670555Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 23:55:32.711308 containerd[2027]: time="2025-09-12T23:55:32.711214399Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:32.716642 containerd[2027]: time="2025-09-12T23:55:32.716504299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:32.718595 containerd[2027]: time="2025-09-12T23:55:32.718164235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 517.924863ms" Sep 12 23:55:32.718595 containerd[2027]: time="2025-09-12T23:55:32.718224031Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 23:55:32.719136 containerd[2027]: time="2025-09-12T23:55:32.719057875Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 23:55:33.274181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 23:55:33.285062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:33.323806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488006016.mount: Deactivated successfully. Sep 12 23:55:33.959896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:33.970124 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:55:34.092504 kubelet[2655]: E0912 23:55:34.092083 2655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:55:34.100760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:55:34.101136 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 23:55:36.382619 containerd[2027]: time="2025-09-12T23:55:36.381684957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:36.411554 containerd[2027]: time="2025-09-12T23:55:36.411467337Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 12 23:55:36.450524 containerd[2027]: time="2025-09-12T23:55:36.450434169Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:36.494008 containerd[2027]: time="2025-09-12T23:55:36.493897077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:36.497459 containerd[2027]: time="2025-09-12T23:55:36.496662105Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.777540138s" Sep 12 23:55:36.497459 containerd[2027]: time="2025-09-12T23:55:36.496729365Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 12 23:55:39.679405 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 23:55:44.351506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 23:55:44.362799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:44.433186 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 23:55:44.433430 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 23:55:44.435407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:44.450947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:44.515151 systemd[1]: Reloading requested from client PID 2740 ('systemctl') (unit session-7.scope)... Sep 12 23:55:44.515183 systemd[1]: Reloading... Sep 12 23:55:44.776580 zram_generator::config[2783]: No configuration found. Sep 12 23:55:45.030153 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:55:45.216251 systemd[1]: Reloading finished in 700 ms. Sep 12 23:55:45.307147 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 23:55:45.307350 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 23:55:45.307892 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:45.315160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:45.663803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:55:45.688484 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 23:55:45.764795 kubelet[2843]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:55:45.764795 kubelet[2843]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 23:55:45.764795 kubelet[2843]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:55:45.765407 kubelet[2843]: I0912 23:55:45.764890 2843 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 23:55:46.418584 kubelet[2843]: I0912 23:55:46.417938 2843 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 23:55:46.418584 kubelet[2843]: I0912 23:55:46.417987 2843 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 23:55:46.418584 kubelet[2843]: I0912 23:55:46.418439 2843 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 23:55:46.463688 kubelet[2843]: E0912 23:55:46.463613 2843 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.162:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:55:46.467599 kubelet[2843]: I0912 23:55:46.467510 2843 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 23:55:46.481049 kubelet[2843]: E0912 23:55:46.480956 2843 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 23:55:46.481049 kubelet[2843]: I0912 23:55:46.481037 2843 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 23:55:46.489696 kubelet[2843]: I0912 23:55:46.489654 2843 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 23:55:46.491584 kubelet[2843]: I0912 23:55:46.490296 2843 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 23:55:46.491584 kubelet[2843]: I0912 23:55:46.490344 2843 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-162","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 23:55:46.491584 kubelet[2843]: I0912 23:55:46.490883 2843 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 23:55:46.491584 kubelet[2843]: I0912 23:55:46.490909 2843 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 23:55:46.492062 kubelet[2843]: I0912 23:55:46.491276 2843 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:55:46.497920 kubelet[2843]: I0912 23:55:46.497872 2843 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 23:55:46.498140 kubelet[2843]: I0912 23:55:46.498117 2843 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 23:55:46.498259 kubelet[2843]: I0912 23:55:46.498240 2843 kubelet.go:352] "Adding apiserver pod source"
Sep 12 23:55:46.498372 kubelet[2843]: I0912 23:55:46.498351 2843 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 23:55:46.506143 kubelet[2843]: W0912 23:55:46.506040 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-162&limit=500&resourceVersion=0": dial tcp 172.31.20.162:6443: connect: connection refused
Sep 12 23:55:46.506299 kubelet[2843]: E0912 23:55:46.506151 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-162&limit=500&resourceVersion=0\": dial tcp 172.31.20.162:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:55:46.507480 kubelet[2843]: W0912 23:55:46.507377 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.162:6443: connect: connection refused
Sep 12 23:55:46.507680 kubelet[2843]: E0912 23:55:46.507484 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.162:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:55:46.507741 kubelet[2843]: I0912 23:55:46.507701 2843 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 23:55:46.508873 kubelet[2843]: I0912 23:55:46.508807 2843 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 23:55:46.509262 kubelet[2843]: W0912 23:55:46.509096 2843 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 23:55:46.511718 kubelet[2843]: I0912 23:55:46.511415 2843 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 23:55:46.511718 kubelet[2843]: I0912 23:55:46.511482 2843 server.go:1287] "Started kubelet"
Sep 12 23:55:46.521193 kubelet[2843]: E0912 23:55:46.520472 2843 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.162:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.162:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-162.1864ae4078efb57b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-162,UID:ip-172-31-20-162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-162,},FirstTimestamp:2025-09-12 23:55:46.511451515 +0000 UTC m=+0.815959049,LastTimestamp:2025-09-12 23:55:46.511451515 +0000 UTC m=+0.815959049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-162,}"
Sep 12 23:55:46.521428 kubelet[2843]: I0912 23:55:46.521250 2843 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 23:55:46.523599 kubelet[2843]: I0912 23:55:46.522137 2843 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 23:55:46.523599 kubelet[2843]: I0912 23:55:46.522718 2843 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 23:55:46.523599 kubelet[2843]: I0912 23:55:46.522854 2843 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 23:55:46.525129 kubelet[2843]: I0912 23:55:46.525079 2843 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 23:55:46.531275 kubelet[2843]: I0912 23:55:46.531219 2843 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 23:55:46.536265 kubelet[2843]: I0912 23:55:46.536210 2843 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 23:55:46.536837 kubelet[2843]: E0912 23:55:46.536785 2843 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-162\" not found"
Sep 12 23:55:46.537953 kubelet[2843]: I0912 23:55:46.537883 2843 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 23:55:46.538116 kubelet[2843]: I0912 23:55:46.538014 2843 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 23:55:46.541071 kubelet[2843]: W0912 23:55:46.540932 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.162:6443: connect: connection refused
Sep 12 23:55:46.541221 kubelet[2843]: E0912 23:55:46.541106 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.162:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:55:46.541425 kubelet[2843]: E0912 23:55:46.541341 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-162?timeout=10s\": dial tcp 172.31.20.162:6443: connect: connection refused" interval="200ms"
Sep 12 23:55:46.542603 kubelet[2843]: I0912 23:55:46.542458 2843 factory.go:221] Registration of the systemd container factory successfully
Sep 12 23:55:46.542964 kubelet[2843]: I0912 23:55:46.542860 2843 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 23:55:46.546975 kubelet[2843]: I0912 23:55:46.546922 2843 factory.go:221] Registration of the containerd container factory successfully
Sep 12 23:55:46.557305 kubelet[2843]: E0912 23:55:46.557231 2843 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 23:55:46.571152 kubelet[2843]: I0912 23:55:46.571038 2843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 23:55:46.573501 kubelet[2843]: I0912 23:55:46.573419 2843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 23:55:46.573501 kubelet[2843]: I0912 23:55:46.573476 2843 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 23:55:46.573501 kubelet[2843]: I0912 23:55:46.573510 2843 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
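The three deprecated-flag warnings at startup all point to the same remedy: move the settings into the KubeletConfiguration file passed via --config. A minimal sketch of the equivalent config, with values taken from this log where available; the containerd socket path is an assumption, and --pod-infra-container-image has no config-file equivalent (it belongs in the runtime's own config, sketched further below):

    # Sketch of a KubeletConfiguration replacing the deprecated flags.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Matches CgroupDriver "systemd" in the NodeConfig dump above.
    cgroupDriver: systemd
    # Replaces --container-runtime-endpoint (socket path assumed, not read from this log).
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    # Replaces --volume-plugin-dir; path matches the Flexvolume directory probed above.
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    # Matches the "Adding static pod path" entry above.
    staticPodPath: /etc/kubernetes/manifests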
Sep 12 23:55:46.573828 kubelet[2843]: I0912 23:55:46.573567 2843 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 23:55:46.573828 kubelet[2843]: E0912 23:55:46.573676 2843 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 23:55:46.589677 kubelet[2843]: W0912 23:55:46.589101 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.162:6443: connect: connection refused
Sep 12 23:55:46.589677 kubelet[2843]: E0912 23:55:46.589207 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.162:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:55:46.601957 kubelet[2843]: I0912 23:55:46.601637 2843 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 23:55:46.601957 kubelet[2843]: I0912 23:55:46.601668 2843 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 23:55:46.601957 kubelet[2843]: I0912 23:55:46.601719 2843 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:55:46.609621 kubelet[2843]: I0912 23:55:46.609414 2843 policy_none.go:49] "None policy: Start"
Sep 12 23:55:46.609621 kubelet[2843]: I0912 23:55:46.609459 2843 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 23:55:46.609621 kubelet[2843]: I0912 23:55:46.609505 2843 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 23:55:46.623834 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 23:55:46.637422 kubelet[2843]: E0912 23:55:46.637347 2843 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-162\" not found"
Sep 12 23:55:46.647034 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 23:55:46.657571 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 23:55:46.669536 kubelet[2843]: I0912 23:55:46.668979 2843 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 23:55:46.669536 kubelet[2843]: I0912 23:55:46.669332 2843 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 23:55:46.669536 kubelet[2843]: I0912 23:55:46.669359 2843 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 23:55:46.671816 kubelet[2843]: I0912 23:55:46.671661 2843 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 23:55:46.683089 kubelet[2843]: E0912 23:55:46.682987 2843 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 23:55:46.683529 kubelet[2843]: E0912 23:55:46.683369 2843 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-162\" not found"
Sep 12 23:55:46.716200 systemd[1]: Created slice kubepods-burstable-pod60e2efceaabe01274b1274b27ed1affb.slice - libcontainer container kubepods-burstable-pod60e2efceaabe01274b1274b27ed1affb.slice.
Sep 12 23:55:46.740647 kubelet[2843]: E0912 23:55:46.740426 2843 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-162\" not found" node="ip-172-31-20-162"
Sep 12 23:55:46.743248 kubelet[2843]: E0912 23:55:46.743139 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-162?timeout=10s\": dial tcp 172.31.20.162:6443: connect: connection refused" interval="400ms"
Sep 12 23:55:46.750482 systemd[1]: Created slice kubepods-burstable-pod2921e02f1b78dd95206912ff89004a1e.slice - libcontainer container kubepods-burstable-pod2921e02f1b78dd95206912ff89004a1e.slice.
Sep 12 23:55:46.770003 kubelet[2843]: E0912 23:55:46.769957 2843 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-162\" not found" node="ip-172-31-20-162"
Sep 12 23:55:46.773086 kubelet[2843]: I0912 23:55:46.773039 2843 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-162"
Sep 12 23:55:46.776425 kubelet[2843]: E0912 23:55:46.776362 2843 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.162:6443/api/v1/nodes\": dial tcp 172.31.20.162:6443: connect: connection refused" node="ip-172-31-20-162"
Sep 12 23:55:46.783312 systemd[1]: Created slice kubepods-burstable-pod666391cb9e44ab4667bc4a1b9c30c8dc.slice - libcontainer container kubepods-burstable-pod666391cb9e44ab4667bc4a1b9c30c8dc.slice.
Sep 12 23:55:46.787531 kubelet[2843]: E0912 23:55:46.787479 2843 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-162\" not found" node="ip-172-31-20-162"
Sep 12 23:55:46.840696 kubelet[2843]: I0912 23:55:46.840575 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/666391cb9e44ab4667bc4a1b9c30c8dc-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-162\" (UID: \"666391cb9e44ab4667bc4a1b9c30c8dc\") " pod="kube-system/kube-scheduler-ip-172-31-20-162"
Sep 12 23:55:46.840696 kubelet[2843]: I0912 23:55:46.840648 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60e2efceaabe01274b1274b27ed1affb-ca-certs\") pod \"kube-apiserver-ip-172-31-20-162\" (UID: \"60e2efceaabe01274b1274b27ed1affb\") " pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:46.840696 kubelet[2843]: I0912 23:55:46.840689 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2921e02f1b78dd95206912ff89004a1e-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-162\" (UID: \"2921e02f1b78dd95206912ff89004a1e\") " pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:46.841209 kubelet[2843]: I0912 23:55:46.840730 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2921e02f1b78dd95206912ff89004a1e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-162\" (UID: \"2921e02f1b78dd95206912ff89004a1e\") " pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:46.841209 kubelet[2843]: I0912 23:55:46.840769 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2921e02f1b78dd95206912ff89004a1e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-162\" (UID: \"2921e02f1b78dd95206912ff89004a1e\") " pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:46.841209 kubelet[2843]: I0912 23:55:46.840807 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2921e02f1b78dd95206912ff89004a1e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-162\" (UID: \"2921e02f1b78dd95206912ff89004a1e\") " pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:46.841209 kubelet[2843]: I0912 23:55:46.840848 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60e2efceaabe01274b1274b27ed1affb-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-162\" (UID: \"60e2efceaabe01274b1274b27ed1affb\") " pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:46.841209 kubelet[2843]: I0912 23:55:46.840906 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60e2efceaabe01274b1274b27ed1affb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-162\" (UID: \"60e2efceaabe01274b1274b27ed1affb\") " pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:46.841467 kubelet[2843]: I0912 23:55:46.840946 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2921e02f1b78dd95206912ff89004a1e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-162\" (UID: \"2921e02f1b78dd95206912ff89004a1e\") " pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:46.979583 kubelet[2843]: I0912 23:55:46.979400 2843 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-162"
Sep 12 23:55:46.981062 kubelet[2843]: E0912 23:55:46.980985 2843 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.162:6443/api/v1/nodes\": dial tcp 172.31.20.162:6443: connect: connection refused" node="ip-172-31-20-162"
Sep 12 23:55:47.043246 containerd[2027]: time="2025-09-12T23:55:47.043039518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-162,Uid:60e2efceaabe01274b1274b27ed1affb,Namespace:kube-system,Attempt:0,}"
Sep 12 23:55:47.075427 containerd[2027]: time="2025-09-12T23:55:47.075313290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-162,Uid:2921e02f1b78dd95206912ff89004a1e,Namespace:kube-system,Attempt:0,}"
Sep 12 23:55:47.089486 containerd[2027]: time="2025-09-12T23:55:47.089355546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-162,Uid:666391cb9e44ab4667bc4a1b9c30c8dc,Namespace:kube-system,Attempt:0,}"
Sep 12 23:55:47.144956 kubelet[2843]: E0912 23:55:47.144879 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-162?timeout=10s\": dial tcp 172.31.20.162:6443: connect: connection refused" interval="800ms"
Sep 12 23:55:47.384359 kubelet[2843]: I0912 23:55:47.383712 2843 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-162"
Sep 12 23:55:47.384359 kubelet[2843]: E0912 23:55:47.384272 2843 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.162:6443/api/v1/nodes\": dial tcp 172.31.20.162:6443: connect: connection refused" node="ip-172-31-20-162"
Sep 12 23:55:47.424274 kubelet[2843]: W0912 23:55:47.424226 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.162:6443: connect: connection refused
Sep 12 23:55:47.424482 kubelet[2843]: E0912 23:55:47.424450 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.162:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:55:47.478696 kubelet[2843]: W0912 23:55:47.478484 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.162:6443: connect: connection refused
Sep 12 23:55:47.478696 kubelet[2843]: E0912 23:55:47.478629 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.162:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:55:47.574465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1104232124.mount: Deactivated successfully.
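The reconciler entries above show the kubelet attaching the hostPath volumes declared by the static pod manifests under /etc/kubernetes/manifests. For orientation, a sketch of the matching volume stanza from a kubeadm-style kube-controller-manager manifest; the volume names come from this log, but the host paths are kubeadm defaults assumed here, not read from this host:

    # Illustrative volume stanza (kubeadm-style kube-controller-manager.yaml).
    volumes:
    - name: ca-certs
      hostPath: {path: /etc/ssl/certs, type: DirectoryOrCreate}
    - name: flexvolume-dir
      hostPath: {path: /opt/libexec/kubernetes/kubelet-plugins/volume/exec, type: DirectoryOrCreate}
    - name: k8s-certs
      hostPath: {path: /etc/kubernetes/pki, type: DirectoryOrCreate}
    - name: kubeconfig
      hostPath: {path: /etc/kubernetes/controller-manager.conf, type: FileOrCreate}
    - name: usr-share-ca-certificates
      hostPath: {path: /usr/share/ca-certificates, type: DirectoryOrCreate}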
Sep 12 23:55:47.591977 containerd[2027]: time="2025-09-12T23:55:47.591679821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:55:47.594373 containerd[2027]: time="2025-09-12T23:55:47.594272145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Sep 12 23:55:47.597145 containerd[2027]: time="2025-09-12T23:55:47.596130249Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:55:47.599581 containerd[2027]: time="2025-09-12T23:55:47.599170785Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:55:47.602272 containerd[2027]: time="2025-09-12T23:55:47.602187657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:55:47.603815 containerd[2027]: time="2025-09-12T23:55:47.603739641Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 23:55:47.605200 containerd[2027]: time="2025-09-12T23:55:47.605055213Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 23:55:47.608570 containerd[2027]: time="2025-09-12T23:55:47.608430309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:55:47.614621 containerd[2027]: time="2025-09-12T23:55:47.613673589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.512127ms"
Sep 12 23:55:47.618683 containerd[2027]: time="2025-09-12T23:55:47.618591309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.122103ms"
Sep 12 23:55:47.623609 containerd[2027]: time="2025-09-12T23:55:47.623247857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 533.779695ms"
Sep 12 23:55:47.656898 kubelet[2843]: W0912 23:55:47.656649 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-162&limit=500&resourceVersion=0": dial tcp 172.31.20.162:6443: connect: connection refused
Sep 12 23:55:47.656898 kubelet[2843]: E0912 23:55:47.656768 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-162&limit=500&resourceVersion=0\": dial tcp 172.31.20.162:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:55:47.879050 containerd[2027]: time="2025-09-12T23:55:47.878517994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 23:55:47.879050 containerd[2027]: time="2025-09-12T23:55:47.878796802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 23:55:47.882580 containerd[2027]: time="2025-09-12T23:55:47.879652018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:55:47.883472 containerd[2027]: time="2025-09-12T23:55:47.883230574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:55:47.889998 containerd[2027]: time="2025-09-12T23:55:47.889812766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 23:55:47.890397 containerd[2027]: time="2025-09-12T23:55:47.890064850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 23:55:47.890397 containerd[2027]: time="2025-09-12T23:55:47.890116858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:55:47.890682 containerd[2027]: time="2025-09-12T23:55:47.890494978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:55:47.900764 containerd[2027]: time="2025-09-12T23:55:47.900187006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 23:55:47.900764 containerd[2027]: time="2025-09-12T23:55:47.900314122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 23:55:47.900764 containerd[2027]: time="2025-09-12T23:55:47.900356854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:55:47.900764 containerd[2027]: time="2025-09-12T23:55:47.900596398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:55:47.917725 kubelet[2843]: W0912 23:55:47.916726 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.162:6443: connect: connection refused
Sep 12 23:55:47.917725 kubelet[2843]: E0912 23:55:47.916845 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.162:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:55:47.946120 kubelet[2843]: E0912 23:55:47.946013 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-162?timeout=10s\": dial tcp 172.31.20.162:6443: connect: connection refused" interval="1.6s"
Sep 12 23:55:47.951929 systemd[1]: Started cri-containerd-44352076bd3e89d4fa68eeca30914a7886ee01e55e601d8ca826f29acd44622b.scope - libcontainer container 44352076bd3e89d4fa68eeca30914a7886ee01e55e601d8ca826f29acd44622b.
Sep 12 23:55:47.965895 systemd[1]: Started cri-containerd-80e8427b77efbc51af9cb574293c3e49eb52f840090e55c5c348743dabc81bd9.scope - libcontainer container 80e8427b77efbc51af9cb574293c3e49eb52f840090e55c5c348743dabc81bd9.
Sep 12 23:55:47.989970 systemd[1]: Started cri-containerd-06f53ec297e4abf438dafbea7d2678b9c0c76ea8dfe0261c74f722f82f37db58.scope - libcontainer container 06f53ec297e4abf438dafbea7d2678b9c0c76ea8dfe0261c74f722f82f37db58.
Sep 12 23:55:48.090735 containerd[2027]: time="2025-09-12T23:55:48.090188755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-162,Uid:2921e02f1b78dd95206912ff89004a1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"06f53ec297e4abf438dafbea7d2678b9c0c76ea8dfe0261c74f722f82f37db58\""
Sep 12 23:55:48.113850 containerd[2027]: time="2025-09-12T23:55:48.112974343Z" level=info msg="CreateContainer within sandbox \"06f53ec297e4abf438dafbea7d2678b9c0c76ea8dfe0261c74f722f82f37db58\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 23:55:48.132944 containerd[2027]: time="2025-09-12T23:55:48.132374323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-162,Uid:666391cb9e44ab4667bc4a1b9c30c8dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"44352076bd3e89d4fa68eeca30914a7886ee01e55e601d8ca826f29acd44622b\""
Sep 12 23:55:48.141324 containerd[2027]: time="2025-09-12T23:55:48.139991491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-162,Uid:60e2efceaabe01274b1274b27ed1affb,Namespace:kube-system,Attempt:0,} returns sandbox id \"80e8427b77efbc51af9cb574293c3e49eb52f840090e55c5c348743dabc81bd9\""
Sep 12 23:55:48.145212 containerd[2027]: time="2025-09-12T23:55:48.145144123Z" level=info msg="CreateContainer within sandbox \"44352076bd3e89d4fa68eeca30914a7886ee01e55e601d8ca826f29acd44622b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 23:55:48.152892 containerd[2027]: time="2025-09-12T23:55:48.152816659Z" level=info msg="CreateContainer within sandbox \"80e8427b77efbc51af9cb574293c3e49eb52f840090e55c5c348743dabc81bd9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 23:55:48.166782 containerd[2027]: time="2025-09-12T23:55:48.166709719Z" level=info msg="CreateContainer within sandbox \"06f53ec297e4abf438dafbea7d2678b9c0c76ea8dfe0261c74f722f82f37db58\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e4df67986651b0b1829f64eb3da16e054ab9458b6d264f742bfc5a7abed3f56c\""
Sep 12 23:55:48.167932 containerd[2027]: time="2025-09-12T23:55:48.167746363Z" level=info msg="StartContainer for \"e4df67986651b0b1829f64eb3da16e054ab9458b6d264f742bfc5a7abed3f56c\""
Sep 12 23:55:48.189597 kubelet[2843]: I0912 23:55:48.189341 2843 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-162"
Sep 12 23:55:48.190982 kubelet[2843]: E0912 23:55:48.190822 2843 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.162:6443/api/v1/nodes\": dial tcp 172.31.20.162:6443: connect: connection refused" node="ip-172-31-20-162"
Sep 12 23:55:48.202166 containerd[2027]: time="2025-09-12T23:55:48.201987788Z" level=info msg="CreateContainer within sandbox \"44352076bd3e89d4fa68eeca30914a7886ee01e55e601d8ca826f29acd44622b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"63767666f5332aa379fc00014d5468f8f9e38fb604df99e2ca494e2eeffad0cb\""
Sep 12 23:55:48.204702 containerd[2027]: time="2025-09-12T23:55:48.202967132Z" level=info msg="StartContainer for \"63767666f5332aa379fc00014d5468f8f9e38fb604df99e2ca494e2eeffad0cb\""
Sep 12 23:55:48.210939 containerd[2027]: time="2025-09-12T23:55:48.210859040Z" level=info msg="CreateContainer within sandbox \"80e8427b77efbc51af9cb574293c3e49eb52f840090e55c5c348743dabc81bd9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eb2ec2a5f2853d4cd4daed8d59ff2d431e601de98e46d1798d71efa297a6983f\""
Sep 12 23:55:48.211869 containerd[2027]: time="2025-09-12T23:55:48.211621100Z" level=info msg="StartContainer for \"eb2ec2a5f2853d4cd4daed8d59ff2d431e601de98e46d1798d71efa297a6983f\""
Sep 12 23:55:48.234299 systemd[1]: Started cri-containerd-e4df67986651b0b1829f64eb3da16e054ab9458b6d264f742bfc5a7abed3f56c.scope - libcontainer container e4df67986651b0b1829f64eb3da16e054ab9458b6d264f742bfc5a7abed3f56c.
Sep 12 23:55:48.294794 systemd[1]: Started cri-containerd-63767666f5332aa379fc00014d5468f8f9e38fb604df99e2ca494e2eeffad0cb.scope - libcontainer container 63767666f5332aa379fc00014d5468f8f9e38fb604df99e2ca494e2eeffad0cb.
Sep 12 23:55:48.324813 systemd[1]: Started cri-containerd-eb2ec2a5f2853d4cd4daed8d59ff2d431e601de98e46d1798d71efa297a6983f.scope - libcontainer container eb2ec2a5f2853d4cd4daed8d59ff2d431e601de98e46d1798d71efa297a6983f.
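Each sandbox above was created from the pinned pause:3.8 image pulled moments earlier. With --pod-infra-container-image deprecated, the sandbox image is owned by the CRI runtime's own configuration; a sketch of the corresponding containerd setting (the fragment is illustrative, not read from this host):

    # /etc/containerd/config.toml (containerd 1.7, config version 2)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      # Keeps the runtime's sandbox image in sync with what the kubelet expects.
      sandbox_image = "registry.k8s.io/pause:3.8"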
Sep 12 23:55:48.374770 containerd[2027]: time="2025-09-12T23:55:48.374703656Z" level=info msg="StartContainer for \"e4df67986651b0b1829f64eb3da16e054ab9458b6d264f742bfc5a7abed3f56c\" returns successfully"
Sep 12 23:55:48.435911 containerd[2027]: time="2025-09-12T23:55:48.434918685Z" level=info msg="StartContainer for \"eb2ec2a5f2853d4cd4daed8d59ff2d431e601de98e46d1798d71efa297a6983f\" returns successfully"
Sep 12 23:55:48.503592 containerd[2027]: time="2025-09-12T23:55:48.503199893Z" level=info msg="StartContainer for \"63767666f5332aa379fc00014d5468f8f9e38fb604df99e2ca494e2eeffad0cb\" returns successfully"
Sep 12 23:55:48.573161 kubelet[2843]: E0912 23:55:48.573088 2843 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.162:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:55:48.613622 kubelet[2843]: E0912 23:55:48.611781 2843 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-162\" not found" node="ip-172-31-20-162"
Sep 12 23:55:48.621591 kubelet[2843]: E0912 23:55:48.620185 2843 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-162\" not found" node="ip-172-31-20-162"
Sep 12 23:55:48.630625 kubelet[2843]: E0912 23:55:48.628111 2843 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-162\" not found" node="ip-172-31-20-162"
Sep 12 23:55:49.628769 kubelet[2843]: E0912 23:55:49.628650 2843 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-162\" not found" node="ip-172-31-20-162"
Sep 12 23:55:49.633062 kubelet[2843]: E0912 23:55:49.632753 2843 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-162\" not found" node="ip-172-31-20-162"
Sep 12 23:55:49.793958 kubelet[2843]: I0912 23:55:49.793282 2843 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-162"
Sep 12 23:55:50.155512 kubelet[2843]: E0912 23:55:50.155429 2843 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-162\" not found" node="ip-172-31-20-162"
Sep 12 23:55:52.487363 kubelet[2843]: E0912 23:55:52.487297 2843 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-162\" not found" node="ip-172-31-20-162"
Sep 12 23:55:52.510906 kubelet[2843]: I0912 23:55:52.510580 2843 apiserver.go:52] "Watching apiserver"
Sep 12 23:55:52.538809 kubelet[2843]: I0912 23:55:52.538754 2843 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 23:55:52.539789 kubelet[2843]: E0912 23:55:52.539386 2843 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-162.1864ae4078efb57b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-162,UID:ip-172-31-20-162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-162,},FirstTimestamp:2025-09-12 23:55:46.511451515 +0000 UTC m=+0.815959049,LastTimestamp:2025-09-12 23:55:46.511451515 +0000 UTC m=+0.815959049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-162,}"
Sep 12 23:55:52.588741 kubelet[2843]: I0912 23:55:52.587668 2843 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-162"
Sep 12 23:55:52.588741 kubelet[2843]: E0912 23:55:52.587731 2843 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-162\": node \"ip-172-31-20-162\" not found"
Sep 12 23:55:52.631495 kubelet[2843]: E0912 23:55:52.631096 2843 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-162.1864ae407ba9dc5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-162,UID:ip-172-31-20-162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-20-162,},FirstTimestamp:2025-09-12 23:55:46.557205595 +0000 UTC m=+0.861713153,LastTimestamp:2025-09-12 23:55:46.557205595 +0000 UTC m=+0.861713153,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-162,}"
Sep 12 23:55:52.639606 kubelet[2843]: I0912 23:55:52.638656 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:52.660678 kubelet[2843]: E0912 23:55:52.658863 2843 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-162\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:52.661022 kubelet[2843]: I0912 23:55:52.660647 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-162"
Sep 12 23:55:52.667018 kubelet[2843]: E0912 23:55:52.666956 2843 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-162\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-162"
Sep 12 23:55:52.667018 kubelet[2843]: I0912 23:55:52.667005 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:52.670694 kubelet[2843]: E0912 23:55:52.670638 2843 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-162\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:52.675273 kubelet[2843]: I0912 23:55:52.675227 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:52.679725 kubelet[2843]: E0912 23:55:52.679416 2843 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-162\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:53.876768 update_engine[1998]: I20250912 23:55:53.875634 1998 update_attempter.cc:509] Updating boot flags...
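The "no PriorityClass with name system-node-critical was found" failures are transient: that class is built in and is created by the API server's bootstrap controller shortly after the API server comes up, which matches the later "already exists" errors once the mirror pods have been created. For reference, the built-in object is equivalent to this manifest (shown for illustration only; it is installed automatically, not applied by hand):

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: system-node-critical
    value: 2000001000
    description: Used for system critical pods that must not be moved from their current node.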
Sep 12 23:55:54.022741 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3133)
Sep 12 23:55:54.428862 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3134)
Sep 12 23:55:54.744082 systemd[1]: Reloading requested from client PID 3302 ('systemctl') (unit session-7.scope)...
Sep 12 23:55:54.744932 systemd[1]: Reloading...
Sep 12 23:55:55.100638 zram_generator::config[3345]: No configuration found.
Sep 12 23:55:55.266536 kubelet[2843]: I0912 23:55:55.266470 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-162"
Sep 12 23:55:55.409962 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:55:55.646787 systemd[1]: Reloading finished in 901 ms.
Sep 12 23:55:55.748909 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:55:55.766524 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 23:55:55.767299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:55:55.767415 systemd[1]: kubelet.service: Consumed 1.685s CPU time, 130.5M memory peak, 0B memory swap peak.
Sep 12 23:55:55.775210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:55:56.143822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:55:56.159975 (kubelet)[3403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 23:55:56.266671 kubelet[3403]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:55:56.266671 kubelet[3403]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 23:55:56.266671 kubelet[3403]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:55:56.270611 kubelet[3403]: I0912 23:55:56.267387 3403 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 23:55:56.289470 kubelet[3403]: I0912 23:55:56.289423 3403 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 23:55:56.289693 kubelet[3403]: I0912 23:55:56.289671 3403 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 23:55:56.290883 kubelet[3403]: I0912 23:55:56.290843 3403 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 23:55:56.297691 kubelet[3403]: I0912 23:55:56.297627 3403 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
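The recurring KUBELET_EXTRA_ARGS message is benign: the unit references the variable but nothing defines it, so it expands to an empty string. Note also that this restarted kubelet now loads an existing client cert pair instead of retrying CSRs, since the first instance completed its bootstrap. If extra flags were ever needed, a drop-in could define the variable; a sketch, with an illustrative path and example flag:

    # /etc/systemd/system/kubelet.service.d/20-extra-args.conf (illustrative)
    [Service]
    # Any flags placed here are appended to the kubelet command line.
    Environment="KUBELET_EXTRA_ARGS=--node-labels=example.com/role=control-plane"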
Sep 12 23:55:56.304118 kubelet[3403]: I0912 23:55:56.304070 3403 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 23:55:56.309975 kubelet[3403]: E0912 23:55:56.309914 3403 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 23:55:56.309975 kubelet[3403]: I0912 23:55:56.309969 3403 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 23:55:56.318445 kubelet[3403]: I0912 23:55:56.315684 3403 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 23:55:56.318445 kubelet[3403]: I0912 23:55:56.316083 3403 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 23:55:56.318445 kubelet[3403]: I0912 23:55:56.316133 3403 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-162","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 23:55:56.318445 kubelet[3403]: I0912 23:55:56.316433 3403 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 23:55:56.319603 kubelet[3403]: I0912 23:55:56.316454 3403 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 23:55:56.319603 kubelet[3403]: I0912 23:55:56.316535 3403 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:55:56.319603 kubelet[3403]: I0912 23:55:56.316786 3403 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 23:55:56.319603 kubelet[3403]: I0912 23:55:56.316809 3403 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 23:55:56.319603 kubelet[3403]: I0912 23:55:56.316841 3403 kubelet.go:352] "Adding apiserver pod source"
Sep 12 23:55:56.319603 kubelet[3403]: I0912 23:55:56.316860 3403 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 23:55:56.322582 kubelet[3403]: I0912 23:55:56.322525 3403 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 23:55:56.325172 kubelet[3403]: I0912 23:55:56.325083 3403 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 23:55:56.326187 kubelet[3403]: I0912 23:55:56.326154 3403 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 23:55:56.326391 kubelet[3403]: I0912 23:55:56.326371 3403 server.go:1287] "Started kubelet"
Sep 12 23:55:56.333536 kubelet[3403]: I0912 23:55:56.333503 3403 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 23:55:56.345863 kubelet[3403]: I0912 23:55:56.344389 3403 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 23:55:56.347861 kubelet[3403]: I0912 23:55:56.346737 3403 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 23:55:56.372412 kubelet[3403]: I0912 23:55:56.370893 3403 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 23:55:56.373447 kubelet[3403]: I0912 23:55:56.373405 3403 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 23:55:56.380151 kubelet[3403]: I0912 23:55:56.379818 3403 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 23:55:56.386671 kubelet[3403]: I0912 23:55:56.386583 3403 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 23:55:56.387077 kubelet[3403]: E0912 23:55:56.387003 3403 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-162\" not found"
Sep 12 23:55:56.388053 kubelet[3403]: I0912 23:55:56.388011 3403 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 23:55:56.389134 kubelet[3403]: I0912 23:55:56.388278 3403 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 23:55:56.437394 kubelet[3403]: I0912 23:55:56.437158 3403 factory.go:221] Registration of the systemd container factory successfully
Sep 12 23:55:56.441733 kubelet[3403]: I0912 23:55:56.441661 3403 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 23:55:56.457157 kubelet[3403]: I0912 23:55:56.457088 3403 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 23:55:56.462752 kubelet[3403]: I0912 23:55:56.460173 3403 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 23:55:56.462752 kubelet[3403]: I0912 23:55:56.460222 3403 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 23:55:56.462752 kubelet[3403]: I0912 23:55:56.460261 3403 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 23:55:56.462752 kubelet[3403]: I0912 23:55:56.460277 3403 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 23:55:56.462752 kubelet[3403]: E0912 23:55:56.460345 3403 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 23:55:56.487076 kubelet[3403]: E0912 23:55:56.486045 3403 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 23:55:56.487076 kubelet[3403]: I0912 23:55:56.486486 3403 factory.go:221] Registration of the containerd container factory successfully
Sep 12 23:55:56.562799 kubelet[3403]: E0912 23:55:56.562743 3403 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 23:55:56.638581 kubelet[3403]: I0912 23:55:56.638520 3403 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 23:55:56.638798 kubelet[3403]: I0912 23:55:56.638774 3403 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 23:55:56.638924 kubelet[3403]: I0912 23:55:56.638904 3403 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:55:56.639288 kubelet[3403]: I0912 23:55:56.639260 3403 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 23:55:56.639429 kubelet[3403]: I0912 23:55:56.639386 3403 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 23:55:56.639527 kubelet[3403]: I0912 23:55:56.639510 3403 policy_none.go:49] "None policy: Start"
Sep 12 23:55:56.639651 kubelet[3403]: I0912 23:55:56.639633 3403 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 23:55:56.639767 kubelet[3403]: I0912 23:55:56.639749 3403 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 23:55:56.640065 kubelet[3403]: I0912 23:55:56.640044 3403 state_mem.go:75] "Updated machine memory state"
Sep 12 23:55:56.651783 kubelet[3403]: I0912 23:55:56.651747 3403 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 23:55:56.653311 kubelet[3403]: I0912 23:55:56.653262 3403 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 23:55:56.654813 kubelet[3403]: I0912 23:55:56.653607 3403 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 23:55:56.656137 kubelet[3403]: I0912 23:55:56.655712 3403 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 23:55:56.657189 kubelet[3403]: E0912 23:55:56.657153 3403 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 23:55:56.768062 kubelet[3403]: I0912 23:55:56.763747 3403 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:56.768062 kubelet[3403]: I0912 23:55:56.763747 3403 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-162"
Sep 12 23:55:56.768062 kubelet[3403]: I0912 23:55:56.764484 3403 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:56.773270 kubelet[3403]: I0912 23:55:56.770905 3403 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-162"
Sep 12 23:55:56.784982 kubelet[3403]: E0912 23:55:56.784893 3403 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-162\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-162"
Sep 12 23:55:56.796619 kubelet[3403]: I0912 23:55:56.796527 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2921e02f1b78dd95206912ff89004a1e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-162\" (UID: \"2921e02f1b78dd95206912ff89004a1e\") " pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:56.796753 kubelet[3403]: I0912 23:55:56.796632 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60e2efceaabe01274b1274b27ed1affb-ca-certs\") pod \"kube-apiserver-ip-172-31-20-162\" (UID: \"60e2efceaabe01274b1274b27ed1affb\") " pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:56.796753 kubelet[3403]: I0912 23:55:56.796677 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60e2efceaabe01274b1274b27ed1affb-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-162\" (UID: \"60e2efceaabe01274b1274b27ed1affb\") " pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:56.796753 kubelet[3403]: I0912 23:55:56.796716 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2921e02f1b78dd95206912ff89004a1e-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-162\" (UID: \"2921e02f1b78dd95206912ff89004a1e\") " pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:56.797206 kubelet[3403]: I0912 23:55:56.796751 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2921e02f1b78dd95206912ff89004a1e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-162\" (UID: \"2921e02f1b78dd95206912ff89004a1e\") " pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:56.797206 kubelet[3403]: I0912 23:55:56.796791 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2921e02f1b78dd95206912ff89004a1e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-162\" (UID: \"2921e02f1b78dd95206912ff89004a1e\") " pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:56.797206 kubelet[3403]: I0912 23:55:56.796827 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60e2efceaabe01274b1274b27ed1affb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-162\" (UID: \"60e2efceaabe01274b1274b27ed1affb\") " pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:56.797206 kubelet[3403]: I0912 23:55:56.796861 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2921e02f1b78dd95206912ff89004a1e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-162\" (UID: \"2921e02f1b78dd95206912ff89004a1e\") " pod="kube-system/kube-controller-manager-ip-172-31-20-162"
Sep 12 23:55:56.797206 kubelet[3403]: I0912 23:55:56.796895 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/666391cb9e44ab4667bc4a1b9c30c8dc-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-162\" (UID: \"666391cb9e44ab4667bc4a1b9c30c8dc\") " pod="kube-system/kube-scheduler-ip-172-31-20-162"
Sep 12 23:55:56.802038 kubelet[3403]: I0912 23:55:56.799915 3403 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-162"
Sep 12 23:55:56.802038 kubelet[3403]: I0912 23:55:56.800034 3403 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-162"
Sep 12 23:55:57.319583 kubelet[3403]: I0912 23:55:57.318406 3403 apiserver.go:52] "Watching apiserver"
Sep 12 23:55:57.388909 kubelet[3403]: I0912 23:55:57.388833 3403 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 23:55:57.548689 kubelet[3403]: I0912 23:55:57.547864 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-162" podStartSLOduration=1.5478402660000001 podStartE2EDuration="1.547840266s" podCreationTimestamp="2025-09-12 23:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:55:57.531826566 +0000 UTC m=+1.361430980" watchObservedRunningTime="2025-09-12 23:55:57.547840266 +0000 UTC m=+1.377444668"
Sep 12 23:55:57.562037 kubelet[3403]: I0912 23:55:57.561940 3403 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:57.581121 kubelet[3403]: E0912 23:55:57.580925 3403 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-162\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-162"
Sep 12 23:55:57.592882 kubelet[3403]: I0912 23:55:57.592789 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-162" podStartSLOduration=2.592765998 podStartE2EDuration="2.592765998s" podCreationTimestamp="2025-09-12 23:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:55:57.549898266 +0000 UTC m=+1.379502668" watchObservedRunningTime="2025-09-12 23:55:57.592765998 +0000 UTC m=+1.422370388"
Sep 12 23:55:57.620284 kubelet[3403]: I0912 23:55:57.620158 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-162" podStartSLOduration=1.6201309780000002 podStartE2EDuration="1.620130978s" podCreationTimestamp="2025-09-12 23:55:56 +0000
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:55:57.595807074 +0000 UTC m=+1.425411452" watchObservedRunningTime="2025-09-12 23:55:57.620130978 +0000 UTC m=+1.449735368" Sep 12 23:55:59.349982 kubelet[3403]: I0912 23:55:59.349920 3403 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 23:55:59.351162 containerd[2027]: time="2025-09-12T23:55:59.351089047Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 23:55:59.353196 kubelet[3403]: I0912 23:55:59.351968 3403 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 23:56:00.286563 systemd[1]: Created slice kubepods-besteffort-podd4fa6b54_2548_4360_afc2_53c8599d324b.slice - libcontainer container kubepods-besteffort-podd4fa6b54_2548_4360_afc2_53c8599d324b.slice. Sep 12 23:56:00.319494 kubelet[3403]: I0912 23:56:00.318911 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4fa6b54-2548-4360-afc2-53c8599d324b-kube-proxy\") pod \"kube-proxy-pzdbq\" (UID: \"d4fa6b54-2548-4360-afc2-53c8599d324b\") " pod="kube-system/kube-proxy-pzdbq" Sep 12 23:56:00.319494 kubelet[3403]: I0912 23:56:00.319044 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4fa6b54-2548-4360-afc2-53c8599d324b-xtables-lock\") pod \"kube-proxy-pzdbq\" (UID: \"d4fa6b54-2548-4360-afc2-53c8599d324b\") " pod="kube-system/kube-proxy-pzdbq" Sep 12 23:56:00.319494 kubelet[3403]: I0912 23:56:00.319098 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnkw2\" (UniqueName: \"kubernetes.io/projected/d4fa6b54-2548-4360-afc2-53c8599d324b-kube-api-access-jnkw2\") pod \"kube-proxy-pzdbq\" (UID: \"d4fa6b54-2548-4360-afc2-53c8599d324b\") " pod="kube-system/kube-proxy-pzdbq" Sep 12 23:56:00.319494 kubelet[3403]: I0912 23:56:00.319140 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4fa6b54-2548-4360-afc2-53c8599d324b-lib-modules\") pod \"kube-proxy-pzdbq\" (UID: \"d4fa6b54-2548-4360-afc2-53c8599d324b\") " pod="kube-system/kube-proxy-pzdbq" Sep 12 23:56:00.521899 systemd[1]: Created slice kubepods-besteffort-pod621ac99b_38dd_4570_a503_72b598be17ca.slice - libcontainer container kubepods-besteffort-pod621ac99b_38dd_4570_a503_72b598be17ca.slice. 
Sep 12 23:56:00.523847 kubelet[3403]: I0912 23:56:00.523151 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/621ac99b-38dd-4570-a503-72b598be17ca-var-lib-calico\") pod \"tigera-operator-755d956888-6vnws\" (UID: \"621ac99b-38dd-4570-a503-72b598be17ca\") " pod="tigera-operator/tigera-operator-755d956888-6vnws" Sep 12 23:56:00.523847 kubelet[3403]: I0912 23:56:00.523222 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lld5\" (UniqueName: \"kubernetes.io/projected/621ac99b-38dd-4570-a503-72b598be17ca-kube-api-access-8lld5\") pod \"tigera-operator-755d956888-6vnws\" (UID: \"621ac99b-38dd-4570-a503-72b598be17ca\") " pod="tigera-operator/tigera-operator-755d956888-6vnws" Sep 12 23:56:00.607256 containerd[2027]: time="2025-09-12T23:56:00.606906417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzdbq,Uid:d4fa6b54-2548-4360-afc2-53c8599d324b,Namespace:kube-system,Attempt:0,}" Sep 12 23:56:00.670908 containerd[2027]: time="2025-09-12T23:56:00.667827682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:00.670908 containerd[2027]: time="2025-09-12T23:56:00.667944274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:00.670908 containerd[2027]: time="2025-09-12T23:56:00.667990186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:00.670908 containerd[2027]: time="2025-09-12T23:56:00.668229802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:00.721206 systemd[1]: Started cri-containerd-c243140df67dcf6ffdd0a252b3f38100bb693c8ed129bab88a173140af39b848.scope - libcontainer container c243140df67dcf6ffdd0a252b3f38100bb693c8ed129bab88a173140af39b848. 
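The systemd "Started cri-containerd-c24314….scope" unit above is the transient scope the runc shim places the new sandbox into; the hex suffix in the unit name is the sandbox ID that RunPodSandbox returns in the next entry. A short sketch, under the same assumed socket path, that lists sandboxes over the CRI and prints the scope unit each one should map to (the cri-containerd-<id>.scope naming follows what this log shows):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// List pod sandboxes and derive the systemd scope unit from each sandbox ID.
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := rt.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, s := range resp.Items {
    		// e.g. kube-system/kube-proxy-pzdbq -> cri-containerd-c24314….scope
    		fmt.Printf("%s/%s -> cri-containerd-%s.scope\n", s.Metadata.Namespace, s.Metadata.Name, s.Id)
    	}
    }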
Sep 12 23:56:00.773301 containerd[2027]: time="2025-09-12T23:56:00.773240602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzdbq,Uid:d4fa6b54-2548-4360-afc2-53c8599d324b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c243140df67dcf6ffdd0a252b3f38100bb693c8ed129bab88a173140af39b848\"" Sep 12 23:56:00.781361 containerd[2027]: time="2025-09-12T23:56:00.781247098Z" level=info msg="CreateContainer within sandbox \"c243140df67dcf6ffdd0a252b3f38100bb693c8ed129bab88a173140af39b848\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:56:00.815790 containerd[2027]: time="2025-09-12T23:56:00.815525182Z" level=info msg="CreateContainer within sandbox \"c243140df67dcf6ffdd0a252b3f38100bb693c8ed129bab88a173140af39b848\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"001f4bcf2ffc921299aec6f9539a840ece6418c72c787410a59b5c07b20fded9\"" Sep 12 23:56:00.816875 containerd[2027]: time="2025-09-12T23:56:00.816758602Z" level=info msg="StartContainer for \"001f4bcf2ffc921299aec6f9539a840ece6418c72c787410a59b5c07b20fded9\"" Sep 12 23:56:00.828388 containerd[2027]: time="2025-09-12T23:56:00.827809834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6vnws,Uid:621ac99b-38dd-4570-a503-72b598be17ca,Namespace:tigera-operator,Attempt:0,}" Sep 12 23:56:00.883870 systemd[1]: Started cri-containerd-001f4bcf2ffc921299aec6f9539a840ece6418c72c787410a59b5c07b20fded9.scope - libcontainer container 001f4bcf2ffc921299aec6f9539a840ece6418c72c787410a59b5c07b20fded9. Sep 12 23:56:00.893023 containerd[2027]: time="2025-09-12T23:56:00.892745099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:00.895400 containerd[2027]: time="2025-09-12T23:56:00.895284551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:00.896288 containerd[2027]: time="2025-09-12T23:56:00.896000087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:00.898697 containerd[2027]: time="2025-09-12T23:56:00.898090331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:00.939137 systemd[1]: Started cri-containerd-6e5b01bf6bb913cade01faf7039f84b073834454837687e87ab17768c3f7a7b2.scope - libcontainer container 6e5b01bf6bb913cade01faf7039f84b073834454837687e87ab17768c3f7a7b2. Sep 12 23:56:01.009588 containerd[2027]: time="2025-09-12T23:56:01.009249187Z" level=info msg="StartContainer for \"001f4bcf2ffc921299aec6f9539a840ece6418c72c787410a59b5c07b20fded9\" returns successfully" Sep 12 23:56:01.075383 containerd[2027]: time="2025-09-12T23:56:01.075322076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6vnws,Uid:621ac99b-38dd-4570-a503-72b598be17ca,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6e5b01bf6bb913cade01faf7039f84b073834454837687e87ab17768c3f7a7b2\"" Sep 12 23:56:01.080776 containerd[2027]: time="2025-09-12T23:56:01.080658572Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 23:56:02.191726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017920897.mount: Deactivated successfully. 
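Before tigera-operator's container can be created, the kubelet asks containerd's CRI image service to pull quay.io/tigera/operator:v1.38.6 (the PullImage entry above; the pull completes about two seconds later). The equivalent standalone call, as a sketch under the same socket-path assumption:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Pull the operator image through the CRI image service, as the kubelet does.
    	img := runtimeapi.NewImageServiceClient(conn)
    	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.6"},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled:", resp.ImageRef) // the repo digest the log reports
    }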
Sep 12 23:56:03.023410 kubelet[3403]: I0912 23:56:03.023309 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pzdbq" podStartSLOduration=3.023285481 podStartE2EDuration="3.023285481s" podCreationTimestamp="2025-09-12 23:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:56:01.610925038 +0000 UTC m=+5.440529440" watchObservedRunningTime="2025-09-12 23:56:03.023285481 +0000 UTC m=+6.852889847" Sep 12 23:56:03.132917 containerd[2027]: time="2025-09-12T23:56:03.132831310Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:03.134489 containerd[2027]: time="2025-09-12T23:56:03.134430850Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 23:56:03.135601 containerd[2027]: time="2025-09-12T23:56:03.135422386Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:03.140629 containerd[2027]: time="2025-09-12T23:56:03.139833886Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:03.141790 containerd[2027]: time="2025-09-12T23:56:03.141722578Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.060937922s" Sep 12 23:56:03.141790 containerd[2027]: time="2025-09-12T23:56:03.141787438Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 23:56:03.147981 containerd[2027]: time="2025-09-12T23:56:03.147782422Z" level=info msg="CreateContainer within sandbox \"6e5b01bf6bb913cade01faf7039f84b073834454837687e87ab17768c3f7a7b2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 23:56:03.174977 containerd[2027]: time="2025-09-12T23:56:03.174793474Z" level=info msg="CreateContainer within sandbox \"6e5b01bf6bb913cade01faf7039f84b073834454837687e87ab17768c3f7a7b2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d\"" Sep 12 23:56:03.176182 containerd[2027]: time="2025-09-12T23:56:03.176013442Z" level=info msg="StartContainer for \"0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d\"" Sep 12 23:56:03.238883 systemd[1]: Started cri-containerd-0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d.scope - libcontainer container 0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d. 
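The podStartSLOduration figures that pod_startup_latency_tracker prints can be reproduced from the logged timestamps: the tracker reports the end-to-end startup time minus the image-pull window, which is why kube-proxy's SLO and E2E durations above are identical (its image needed no pull, so firstStartedPulling is the zero time). A back-of-the-envelope check in Go against the tigera-operator entry just below, with timestamps copied from the log and the pull subtraction assumed to be the only adjustment:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	parse := func(s string) time.Time {
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2025-09-12 23:56:00 +0000 UTC")             // podCreationTimestamp
    	pullStart := parse("2025-09-12 23:56:01.078684572 +0000 UTC") // firstStartedPulling
    	pullEnd := parse("2025-09-12 23:56:03.143887366 +0000 UTC")   // lastFinishedPulling
    	running := parse("2025-09-12 23:56:03.63232038 +0000 UTC")    // watchObservedRunningTime

    	e2e := running.Sub(created)         // ≈ 3.63232038s, the logged podStartE2EDuration
    	slo := e2e - pullEnd.Sub(pullStart) // ≈ 1.567117586s vs logged 1.567117598s (rounding)
    	fmt.Println("E2E:", e2e, "SLO:", slo)
    }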
Sep 12 23:56:03.291811 containerd[2027]: time="2025-09-12T23:56:03.291479627Z" level=info msg="StartContainer for \"0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d\" returns successfully" Sep 12 23:56:03.633006 kubelet[3403]: I0912 23:56:03.632348 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-6vnws" podStartSLOduration=1.5671175979999998 podStartE2EDuration="3.63232038s" podCreationTimestamp="2025-09-12 23:56:00 +0000 UTC" firstStartedPulling="2025-09-12 23:56:01.078684572 +0000 UTC m=+4.908288950" lastFinishedPulling="2025-09-12 23:56:03.143887366 +0000 UTC m=+6.973491732" observedRunningTime="2025-09-12 23:56:03.630696192 +0000 UTC m=+7.460300582" watchObservedRunningTime="2025-09-12 23:56:03.63232038 +0000 UTC m=+7.461924758" Sep 12 23:56:10.368696 sudo[2336]: pam_unix(sudo:session): session closed for user root Sep 12 23:56:10.392901 sshd[2333]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:10.399977 systemd[1]: sshd@6-172.31.20.162:22-147.75.109.163:57482.service: Deactivated successfully. Sep 12 23:56:10.409380 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 23:56:10.412865 systemd[1]: session-7.scope: Consumed 11.575s CPU time, 151.5M memory peak, 0B memory swap peak. Sep 12 23:56:10.419443 systemd-logind[1997]: Session 7 logged out. Waiting for processes to exit. Sep 12 23:56:10.423868 systemd-logind[1997]: Removed session 7. Sep 12 23:56:24.650184 systemd[1]: Created slice kubepods-besteffort-pod4e26a377_946c_47ec_bb3a_8bdab0450bb2.slice - libcontainer container kubepods-besteffort-pod4e26a377_946c_47ec_bb3a_8bdab0450bb2.slice. Sep 12 23:56:24.686936 kubelet[3403]: I0912 23:56:24.686854 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnr9s\" (UniqueName: \"kubernetes.io/projected/4e26a377-946c-47ec-bb3a-8bdab0450bb2-kube-api-access-bnr9s\") pod \"calico-typha-5544f48b67-9spl2\" (UID: \"4e26a377-946c-47ec-bb3a-8bdab0450bb2\") " pod="calico-system/calico-typha-5544f48b67-9spl2" Sep 12 23:56:24.688003 kubelet[3403]: I0912 23:56:24.686945 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4e26a377-946c-47ec-bb3a-8bdab0450bb2-typha-certs\") pod \"calico-typha-5544f48b67-9spl2\" (UID: \"4e26a377-946c-47ec-bb3a-8bdab0450bb2\") " pod="calico-system/calico-typha-5544f48b67-9spl2" Sep 12 23:56:24.688003 kubelet[3403]: I0912 23:56:24.686990 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e26a377-946c-47ec-bb3a-8bdab0450bb2-tigera-ca-bundle\") pod \"calico-typha-5544f48b67-9spl2\" (UID: \"4e26a377-946c-47ec-bb3a-8bdab0450bb2\") " pod="calico-system/calico-typha-5544f48b67-9spl2" Sep 12 23:56:24.966701 containerd[2027]: time="2025-09-12T23:56:24.965899138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5544f48b67-9spl2,Uid:4e26a377-946c-47ec-bb3a-8bdab0450bb2,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:25.049674 containerd[2027]: time="2025-09-12T23:56:25.048181819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:25.052087 containerd[2027]: time="2025-09-12T23:56:25.050411251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:25.052087 containerd[2027]: time="2025-09-12T23:56:25.051650599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:25.052087 containerd[2027]: time="2025-09-12T23:56:25.051895651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:25.114896 systemd[1]: Started cri-containerd-5696bb19ec992520b48f41300a82cbb279595db7b290cc7c8a7c2f1ea132109c.scope - libcontainer container 5696bb19ec992520b48f41300a82cbb279595db7b290cc7c8a7c2f1ea132109c. Sep 12 23:56:25.172409 systemd[1]: Created slice kubepods-besteffort-podbc402d4f_257d_40a2_a484_a4c801cb61db.slice - libcontainer container kubepods-besteffort-podbc402d4f_257d_40a2_a484_a4c801cb61db.slice. Sep 12 23:56:25.191651 kubelet[3403]: I0912 23:56:25.191584 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bc402d4f-257d-40a2-a484-a4c801cb61db-node-certs\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.192325 kubelet[3403]: I0912 23:56:25.192266 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bc402d4f-257d-40a2-a484-a4c801cb61db-cni-bin-dir\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.192655 kubelet[3403]: I0912 23:56:25.192601 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc402d4f-257d-40a2-a484-a4c801cb61db-var-lib-calico\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.192955 kubelet[3403]: I0912 23:56:25.192871 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc402d4f-257d-40a2-a484-a4c801cb61db-xtables-lock\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.193769 kubelet[3403]: I0912 23:56:25.193433 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bc402d4f-257d-40a2-a484-a4c801cb61db-cni-net-dir\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.194317 kubelet[3403]: I0912 23:56:25.193639 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc402d4f-257d-40a2-a484-a4c801cb61db-lib-modules\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.194317 kubelet[3403]: I0912 23:56:25.194114 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc402d4f-257d-40a2-a484-a4c801cb61db-tigera-ca-bundle\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " 
pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.194317 kubelet[3403]: I0912 23:56:25.194451 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bc402d4f-257d-40a2-a484-a4c801cb61db-var-run-calico\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.194317 kubelet[3403]: I0912 23:56:25.194508 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdsjl\" (UniqueName: \"kubernetes.io/projected/bc402d4f-257d-40a2-a484-a4c801cb61db-kube-api-access-sdsjl\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.194317 kubelet[3403]: I0912 23:56:25.194583 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bc402d4f-257d-40a2-a484-a4c801cb61db-cni-log-dir\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.195175 kubelet[3403]: I0912 23:56:25.194715 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bc402d4f-257d-40a2-a484-a4c801cb61db-flexvol-driver-host\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.195175 kubelet[3403]: I0912 23:56:25.194777 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bc402d4f-257d-40a2-a484-a4c801cb61db-policysync\") pod \"calico-node-8tcct\" (UID: \"bc402d4f-257d-40a2-a484-a4c801cb61db\") " pod="calico-system/calico-node-8tcct" Sep 12 23:56:25.306069 kubelet[3403]: E0912 23:56:25.305974 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.308635 kubelet[3403]: W0912 23:56:25.306013 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.309278 kubelet[3403]: E0912 23:56:25.308764 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.310470 kubelet[3403]: E0912 23:56:25.310326 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.310934 kubelet[3403]: W0912 23:56:25.310686 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.310934 kubelet[3403]: E0912 23:56:25.310858 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.312670 kubelet[3403]: E0912 23:56:25.312456 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.313306 kubelet[3403]: W0912 23:56:25.313042 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.314023 kubelet[3403]: E0912 23:56:25.313097 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.319680 kubelet[3403]: E0912 23:56:25.317845 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.319680 kubelet[3403]: W0912 23:56:25.318031 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.320777 kubelet[3403]: E0912 23:56:25.320370 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.322223 kubelet[3403]: E0912 23:56:25.322071 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.322223 kubelet[3403]: W0912 23:56:25.322100 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.324918 kubelet[3403]: E0912 23:56:25.322824 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.325788 kubelet[3403]: E0912 23:56:25.325740 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.326030 kubelet[3403]: W0912 23:56:25.325960 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.326208 kubelet[3403]: E0912 23:56:25.326130 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.327128 kubelet[3403]: E0912 23:56:25.326981 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.327128 kubelet[3403]: W0912 23:56:25.327021 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.327128 kubelet[3403]: E0912 23:56:25.327055 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.331468 kubelet[3403]: E0912 23:56:25.331415 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.331468 kubelet[3403]: W0912 23:56:25.331456 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.332057 kubelet[3403]: E0912 23:56:25.331502 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.332793 kubelet[3403]: E0912 23:56:25.332729 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.332793 kubelet[3403]: W0912 23:56:25.332785 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.332974 kubelet[3403]: E0912 23:56:25.332819 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.335794 containerd[2027]: time="2025-09-12T23:56:25.335705648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5544f48b67-9spl2,Uid:4e26a377-946c-47ec-bb3a-8bdab0450bb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"5696bb19ec992520b48f41300a82cbb279595db7b290cc7c8a7c2f1ea132109c\"" Sep 12 23:56:25.337759 kubelet[3403]: E0912 23:56:25.337705 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.337759 kubelet[3403]: W0912 23:56:25.337746 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.338118 kubelet[3403]: E0912 23:56:25.337782 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.341847 containerd[2027]: time="2025-09-12T23:56:25.341066480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 23:56:25.364654 kubelet[3403]: E0912 23:56:25.364414 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.364654 kubelet[3403]: W0912 23:56:25.364449 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.364654 kubelet[3403]: E0912 23:56:25.364504 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.369854 kubelet[3403]: E0912 23:56:25.369772 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zn6sm" podUID="f60d9f74-1f96-4c24-9612-25c7fd4febe8" Sep 12 23:56:25.373462 kubelet[3403]: E0912 23:56:25.373108 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.373462 kubelet[3403]: W0912 23:56:25.373190 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.373462 kubelet[3403]: E0912 23:56:25.373225 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.374532 kubelet[3403]: E0912 23:56:25.374450 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.374532 kubelet[3403]: W0912 23:56:25.374480 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.375285 kubelet[3403]: E0912 23:56:25.374624 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.375940 kubelet[3403]: E0912 23:56:25.375913 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.376592 kubelet[3403]: W0912 23:56:25.376008 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.376592 kubelet[3403]: E0912 23:56:25.376044 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.377797 kubelet[3403]: E0912 23:56:25.377727 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.378368 kubelet[3403]: W0912 23:56:25.377964 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.378368 kubelet[3403]: E0912 23:56:25.378004 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.380020 kubelet[3403]: E0912 23:56:25.379470 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.380020 kubelet[3403]: W0912 23:56:25.379680 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.380020 kubelet[3403]: E0912 23:56:25.379714 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.381387 kubelet[3403]: E0912 23:56:25.381127 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.381387 kubelet[3403]: W0912 23:56:25.381164 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.381387 kubelet[3403]: E0912 23:56:25.381197 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.382107 kubelet[3403]: E0912 23:56:25.382074 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.382769 kubelet[3403]: W0912 23:56:25.382305 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.382769 kubelet[3403]: E0912 23:56:25.382342 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.383855 kubelet[3403]: E0912 23:56:25.383235 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.383855 kubelet[3403]: W0912 23:56:25.383266 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.383855 kubelet[3403]: E0912 23:56:25.383295 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.384761 kubelet[3403]: E0912 23:56:25.384211 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.384761 kubelet[3403]: W0912 23:56:25.384242 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.384761 kubelet[3403]: E0912 23:56:25.384273 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.386216 kubelet[3403]: E0912 23:56:25.385960 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.386216 kubelet[3403]: W0912 23:56:25.386000 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.386216 kubelet[3403]: E0912 23:56:25.386037 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.387154 kubelet[3403]: E0912 23:56:25.387117 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.387738 kubelet[3403]: W0912 23:56:25.387305 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.387738 kubelet[3403]: E0912 23:56:25.387345 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.389032 kubelet[3403]: E0912 23:56:25.388993 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.389699 kubelet[3403]: W0912 23:56:25.389369 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.389699 kubelet[3403]: E0912 23:56:25.389412 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.394797 kubelet[3403]: E0912 23:56:25.394745 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.395161 kubelet[3403]: W0912 23:56:25.394898 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.395161 kubelet[3403]: E0912 23:56:25.394933 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.396111 kubelet[3403]: E0912 23:56:25.395931 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.396111 kubelet[3403]: W0912 23:56:25.395977 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.396111 kubelet[3403]: E0912 23:56:25.396009 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.397064 kubelet[3403]: E0912 23:56:25.397006 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.397477 kubelet[3403]: W0912 23:56:25.397149 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.397477 kubelet[3403]: E0912 23:56:25.397180 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.398876 kubelet[3403]: E0912 23:56:25.398635 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.398876 kubelet[3403]: W0912 23:56:25.398670 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.398876 kubelet[3403]: E0912 23:56:25.398702 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.401085 kubelet[3403]: E0912 23:56:25.400765 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.401085 kubelet[3403]: W0912 23:56:25.400802 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.401085 kubelet[3403]: E0912 23:56:25.400834 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.401490 kubelet[3403]: E0912 23:56:25.401461 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.401671 kubelet[3403]: W0912 23:56:25.401624 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.401872 kubelet[3403]: E0912 23:56:25.401823 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.402575 kubelet[3403]: E0912 23:56:25.402507 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.402775 kubelet[3403]: W0912 23:56:25.402749 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.403073 kubelet[3403]: E0912 23:56:25.402881 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.406301 kubelet[3403]: E0912 23:56:25.405763 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.406301 kubelet[3403]: W0912 23:56:25.405801 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.406301 kubelet[3403]: E0912 23:56:25.405834 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.406876 kubelet[3403]: E0912 23:56:25.406839 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.407093 kubelet[3403]: W0912 23:56:25.407062 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.407394 kubelet[3403]: E0912 23:56:25.407334 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.407822 kubelet[3403]: I0912 23:56:25.407705 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2d2p\" (UniqueName: \"kubernetes.io/projected/f60d9f74-1f96-4c24-9612-25c7fd4febe8-kube-api-access-t2d2p\") pod \"csi-node-driver-zn6sm\" (UID: \"f60d9f74-1f96-4c24-9612-25c7fd4febe8\") " pod="calico-system/csi-node-driver-zn6sm" Sep 12 23:56:25.408894 kubelet[3403]: E0912 23:56:25.408717 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.408894 kubelet[3403]: W0912 23:56:25.408772 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.408894 kubelet[3403]: E0912 23:56:25.408835 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.412136 kubelet[3403]: E0912 23:56:25.411686 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.412136 kubelet[3403]: W0912 23:56:25.411743 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.412136 kubelet[3403]: E0912 23:56:25.411882 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.413234 kubelet[3403]: E0912 23:56:25.412884 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.413234 kubelet[3403]: W0912 23:56:25.412921 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.413234 kubelet[3403]: E0912 23:56:25.412956 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.413234 kubelet[3403]: I0912 23:56:25.413010 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f60d9f74-1f96-4c24-9612-25c7fd4febe8-varrun\") pod \"csi-node-driver-zn6sm\" (UID: \"f60d9f74-1f96-4c24-9612-25c7fd4febe8\") " pod="calico-system/csi-node-driver-zn6sm" Sep 12 23:56:25.414878 kubelet[3403]: E0912 23:56:25.414330 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.414878 kubelet[3403]: W0912 23:56:25.414411 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.414878 kubelet[3403]: E0912 23:56:25.414817 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.415742 kubelet[3403]: I0912 23:56:25.415185 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f60d9f74-1f96-4c24-9612-25c7fd4febe8-registration-dir\") pod \"csi-node-driver-zn6sm\" (UID: \"f60d9f74-1f96-4c24-9612-25c7fd4febe8\") " pod="calico-system/csi-node-driver-zn6sm" Sep 12 23:56:25.417890 kubelet[3403]: E0912 23:56:25.416044 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.417890 kubelet[3403]: W0912 23:56:25.417740 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.418848 kubelet[3403]: E0912 23:56:25.418157 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.419701 kubelet[3403]: E0912 23:56:25.419668 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.419886 kubelet[3403]: W0912 23:56:25.419860 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.420245 kubelet[3403]: E0912 23:56:25.420181 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.423081 kubelet[3403]: E0912 23:56:25.422865 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.423081 kubelet[3403]: W0912 23:56:25.422897 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.423839 kubelet[3403]: E0912 23:56:25.423741 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.425338 kubelet[3403]: W0912 23:56:25.425146 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.426340 kubelet[3403]: E0912 23:56:25.426114 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.428620 kubelet[3403]: E0912 23:56:25.428225 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.428620 kubelet[3403]: W0912 23:56:25.428257 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.428620 kubelet[3403]: E0912 23:56:25.428289 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.428620 kubelet[3403]: E0912 23:56:25.424509 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.428620 kubelet[3403]: I0912 23:56:25.428364 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f60d9f74-1f96-4c24-9612-25c7fd4febe8-socket-dir\") pod \"csi-node-driver-zn6sm\" (UID: \"f60d9f74-1f96-4c24-9612-25c7fd4febe8\") " pod="calico-system/csi-node-driver-zn6sm" Sep 12 23:56:25.430503 kubelet[3403]: E0912 23:56:25.430187 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.430503 kubelet[3403]: W0912 23:56:25.430243 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.430503 kubelet[3403]: E0912 23:56:25.430275 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.430503 kubelet[3403]: I0912 23:56:25.430343 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f60d9f74-1f96-4c24-9612-25c7fd4febe8-kubelet-dir\") pod \"csi-node-driver-zn6sm\" (UID: \"f60d9f74-1f96-4c24-9612-25c7fd4febe8\") " pod="calico-system/csi-node-driver-zn6sm" Sep 12 23:56:25.432258 kubelet[3403]: E0912 23:56:25.431882 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.432258 kubelet[3403]: W0912 23:56:25.431922 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.432258 kubelet[3403]: E0912 23:56:25.431955 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.432951 kubelet[3403]: E0912 23:56:25.432727 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.432951 kubelet[3403]: W0912 23:56:25.432792 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.432951 kubelet[3403]: E0912 23:56:25.432821 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.434941 kubelet[3403]: E0912 23:56:25.434893 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.435399 kubelet[3403]: W0912 23:56:25.435134 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.435399 kubelet[3403]: E0912 23:56:25.435175 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.436898 kubelet[3403]: E0912 23:56:25.436727 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.436898 kubelet[3403]: W0912 23:56:25.436793 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.436898 kubelet[3403]: E0912 23:56:25.436838 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.481193 containerd[2027]: time="2025-09-12T23:56:25.481010445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8tcct,Uid:bc402d4f-257d-40a2-a484-a4c801cb61db,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:25.532895 kubelet[3403]: E0912 23:56:25.532830 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.532895 kubelet[3403]: W0912 23:56:25.532871 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.533596 kubelet[3403]: E0912 23:56:25.532913 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.535025 kubelet[3403]: E0912 23:56:25.534690 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.535025 kubelet[3403]: W0912 23:56:25.534730 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.535025 kubelet[3403]: E0912 23:56:25.534778 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.537364 kubelet[3403]: E0912 23:56:25.536839 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.537364 kubelet[3403]: W0912 23:56:25.536875 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.538589 kubelet[3403]: E0912 23:56:25.538424 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.538589 kubelet[3403]: E0912 23:56:25.538463 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.538589 kubelet[3403]: W0912 23:56:25.538483 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.541242 kubelet[3403]: E0912 23:56:25.538938 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.541242 kubelet[3403]: E0912 23:56:25.540677 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.541242 kubelet[3403]: W0912 23:56:25.540708 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.541242 kubelet[3403]: E0912 23:56:25.540760 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.541973 kubelet[3403]: E0912 23:56:25.541871 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.541973 kubelet[3403]: W0912 23:56:25.541907 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.542862 kubelet[3403]: E0912 23:56:25.542820 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.542862 kubelet[3403]: W0912 23:56:25.542855 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.543755 kubelet[3403]: E0912 23:56:25.543521 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.544115 kubelet[3403]: E0912 23:56:25.543996 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.545845 kubelet[3403]: E0912 23:56:25.544668 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.545845 kubelet[3403]: W0912 23:56:25.544705 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.546469 kubelet[3403]: E0912 23:56:25.546410 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.546469 kubelet[3403]: W0912 23:56:25.546457 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.547310 kubelet[3403]: E0912 23:56:25.547078 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.547310 kubelet[3403]: E0912 23:56:25.547148 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.548501 kubelet[3403]: E0912 23:56:25.548457 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.548501 kubelet[3403]: W0912 23:56:25.548496 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.549491 kubelet[3403]: E0912 23:56:25.549439 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.552802 kubelet[3403]: E0912 23:56:25.552734 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.552802 kubelet[3403]: W0912 23:56:25.552790 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.553559 kubelet[3403]: E0912 23:56:25.553480 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.554737 kubelet[3403]: E0912 23:56:25.554688 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.554737 kubelet[3403]: W0912 23:56:25.554728 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.555015 kubelet[3403]: E0912 23:56:25.554964 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.555168 kubelet[3403]: E0912 23:56:25.555133 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.555168 kubelet[3403]: W0912 23:56:25.555161 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.556512 kubelet[3403]: E0912 23:56:25.555633 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.556788 containerd[2027]: time="2025-09-12T23:56:25.555930573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:25.556788 containerd[2027]: time="2025-09-12T23:56:25.556058421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:25.556788 containerd[2027]: time="2025-09-12T23:56:25.556097397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:25.556788 containerd[2027]: time="2025-09-12T23:56:25.556304409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:25.557794 kubelet[3403]: E0912 23:56:25.557442 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.557794 kubelet[3403]: W0912 23:56:25.557468 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.557794 kubelet[3403]: E0912 23:56:25.557559 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.564068 kubelet[3403]: E0912 23:56:25.560601 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.564068 kubelet[3403]: W0912 23:56:25.560648 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.564068 kubelet[3403]: E0912 23:56:25.560720 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.564068 kubelet[3403]: E0912 23:56:25.561192 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.564068 kubelet[3403]: W0912 23:56:25.561214 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.564068 kubelet[3403]: E0912 23:56:25.562739 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.564068 kubelet[3403]: E0912 23:56:25.563384 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.564068 kubelet[3403]: W0912 23:56:25.563406 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.564068 kubelet[3403]: E0912 23:56:25.563487 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.566007 kubelet[3403]: E0912 23:56:25.565702 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.566007 kubelet[3403]: W0912 23:56:25.565733 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.566786 kubelet[3403]: E0912 23:56:25.566687 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.568627 kubelet[3403]: E0912 23:56:25.568589 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.569135 kubelet[3403]: W0912 23:56:25.569034 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.571054 kubelet[3403]: E0912 23:56:25.570672 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.574452 kubelet[3403]: E0912 23:56:25.572929 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.574452 kubelet[3403]: W0912 23:56:25.572967 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.576046 kubelet[3403]: E0912 23:56:25.575697 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.576859 kubelet[3403]: E0912 23:56:25.576685 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.576859 kubelet[3403]: W0912 23:56:25.576722 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.578338 kubelet[3403]: E0912 23:56:25.577626 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.579346 kubelet[3403]: E0912 23:56:25.579136 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.579346 kubelet[3403]: W0912 23:56:25.579191 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.579346 kubelet[3403]: E0912 23:56:25.579265 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.582935 kubelet[3403]: E0912 23:56:25.582893 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.583341 kubelet[3403]: W0912 23:56:25.583160 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.583341 kubelet[3403]: E0912 23:56:25.583250 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:25.584721 kubelet[3403]: E0912 23:56:25.584378 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.584721 kubelet[3403]: W0912 23:56:25.584412 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.584721 kubelet[3403]: E0912 23:56:25.584461 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.587387 kubelet[3403]: E0912 23:56:25.586952 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.587387 kubelet[3403]: W0912 23:56:25.586988 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.587387 kubelet[3403]: E0912 23:56:25.587021 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:25.627050 systemd[1]: Started cri-containerd-a063701be077aa3d99bb07dd89ef4def063eafb7a99196ee86519aba69ca5fab.scope - libcontainer container a063701be077aa3d99bb07dd89ef4def063eafb7a99196ee86519aba69ca5fab. Sep 12 23:56:25.749586 containerd[2027]: time="2025-09-12T23:56:25.749351446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8tcct,Uid:bc402d4f-257d-40a2-a484-a4c801cb61db,Namespace:calico-system,Attempt:0,} returns sandbox id \"a063701be077aa3d99bb07dd89ef4def063eafb7a99196ee86519aba69ca5fab\"" Sep 12 23:56:25.762896 kubelet[3403]: E0912 23:56:25.762041 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:25.762896 kubelet[3403]: W0912 23:56:25.762076 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:25.762896 kubelet[3403]: E0912 23:56:25.762108 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:26.999738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132689396.mount: Deactivated successfully. 
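The triplet repeated throughout this span (driver-call.go:262, driver-call.go:149, plugins.go:695) is kubelet's FlexVolume prober at work: on each probe it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and decodes stdout as a JSON driver status. The binary is not installed yet, so the exec fails with "executable file not found in $PATH", stdout stays empty, and decoding an empty string yields "unexpected end of JSON input". Below is a minimal sketch of the init reply a driver has to print, assuming the standard FlexVolume status contract (status/message/capabilities); it is illustrative Go, not Calico's actual uds driver.

    // Illustrative FlexVolume driver stub: answers "init" with the JSON
    // status object kubelet unmarshals after each driver call.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string        `json:"status"`
        Message      string        `json:"message,omitempty"`
        Capabilities *capabilities `json:"capabilities,omitempty"`
    }

    type capabilities struct {
        Attach bool `json:"attach"` // false: kubelet skips attach/detach calls
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: &capabilities{Attach: false},
            })
            // If this line were missing, kubelet would log exactly the
            // "unexpected end of JSON input" errors seen above.
            fmt.Println(string(out))
            return
        }
        fmt.Println(`{"status": "Not supported"}`) // calls the driver does not implement
        os.Exit(1)
    }

The errors are transient here: the flexvol-driver container created from ghcr.io/flatcar/calico/pod2daemon-flexvol later in this log (23:56:29) is what normally installs the real uds binary onto the host, after which the probe starts succeeding and the messages stop.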
Sep 12 23:56:27.464441 kubelet[3403]: E0912 23:56:27.462881 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zn6sm" podUID="f60d9f74-1f96-4c24-9612-25c7fd4febe8" Sep 12 23:56:28.387928 containerd[2027]: time="2025-09-12T23:56:28.385800659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:28.387928 containerd[2027]: time="2025-09-12T23:56:28.387155267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 12 23:56:28.390016 containerd[2027]: time="2025-09-12T23:56:28.388205711Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:28.392958 containerd[2027]: time="2025-09-12T23:56:28.392787239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:28.396101 containerd[2027]: time="2025-09-12T23:56:28.396021251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 3.053904315s" Sep 12 23:56:28.396101 containerd[2027]: time="2025-09-12T23:56:28.396092519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 23:56:28.402711 containerd[2027]: time="2025-09-12T23:56:28.401108387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 23:56:28.430060 containerd[2027]: time="2025-09-12T23:56:28.429966107Z" level=info msg="CreateContainer within sandbox \"5696bb19ec992520b48f41300a82cbb279595db7b290cc7c8a7c2f1ea132109c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 23:56:28.465871 containerd[2027]: time="2025-09-12T23:56:28.465688716Z" level=info msg="CreateContainer within sandbox \"5696bb19ec992520b48f41300a82cbb279595db7b290cc7c8a7c2f1ea132109c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"db1b3a37991597f9e286faf29eaeb3fdb42f63b3ab9275d243d8cb1905f94ccf\"" Sep 12 23:56:28.469567 containerd[2027]: time="2025-09-12T23:56:28.469490604Z" level=info msg="StartContainer for \"db1b3a37991597f9e286faf29eaeb3fdb42f63b3ab9275d243d8cb1905f94ccf\"" Sep 12 23:56:28.556377 systemd[1]: Started cri-containerd-db1b3a37991597f9e286faf29eaeb3fdb42f63b3ab9275d243d8cb1905f94ccf.scope - libcontainer container db1b3a37991597f9e286faf29eaeb3fdb42f63b3ab9275d243d8cb1905f94ccf. 
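The Pulled entries pair a repo size with a wall-clock duration, so effective pull throughput falls out directly: typha moved 33105629 bytes in 3.053904315s, about 10.3 MiB/s, and pod2daemon-flexvol (recorded further down) 5636015 bytes in 1.403541211s, about 3.8 MiB/s. A quick sketch of that arithmetic, with the sizes and durations copied verbatim from the log:

    // Effective image-pull rates from the "Pulled image ... size X in Ys"
    // containerd entries in this log (MiB = 1<<20 bytes).
    package main

    import "fmt"

    func main() {
        pulls := []struct {
            image string
            bytes float64 // repo size reported in the Pulled entry
            secs  float64
        }{
            {"calico/typha:v3.30.3", 33105629, 3.053904315},
            {"calico/pod2daemon-flexvol:v3.30.3", 5636015, 1.403541211},
        }
        for _, p := range pulls {
            fmt.Printf("%-36s %5.2f MiB/s\n", p.image, p.bytes/p.secs/(1<<20))
        }
    }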
Sep 12 23:56:28.665609 containerd[2027]: time="2025-09-12T23:56:28.661678453Z" level=info msg="StartContainer for \"db1b3a37991597f9e286faf29eaeb3fdb42f63b3ab9275d243d8cb1905f94ccf\" returns successfully" Sep 12 23:56:28.725856 kubelet[3403]: I0912 23:56:28.725752 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5544f48b67-9spl2" podStartSLOduration=1.665809474 podStartE2EDuration="4.725728933s" podCreationTimestamp="2025-09-12 23:56:24 +0000 UTC" firstStartedPulling="2025-09-12 23:56:25.339853448 +0000 UTC m=+29.169457826" lastFinishedPulling="2025-09-12 23:56:28.399772907 +0000 UTC m=+32.229377285" observedRunningTime="2025-09-12 23:56:28.725463625 +0000 UTC m=+32.555068015" watchObservedRunningTime="2025-09-12 23:56:28.725728933 +0000 UTC m=+32.555333311" Sep 12 23:56:28.744458 kubelet[3403]: E0912 23:56:28.744195 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.744458 kubelet[3403]: W0912 23:56:28.744231 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.744458 kubelet[3403]: E0912 23:56:28.744265 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.745999 kubelet[3403]: E0912 23:56:28.745785 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.746993 kubelet[3403]: W0912 23:56:28.746270 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.746993 kubelet[3403]: E0912 23:56:28.746364 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.748154 kubelet[3403]: E0912 23:56:28.747795 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.748154 kubelet[3403]: W0912 23:56:28.747835 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.748154 kubelet[3403]: E0912 23:56:28.747867 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.750500 kubelet[3403]: E0912 23:56:28.750202 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.750500 kubelet[3403]: W0912 23:56:28.750240 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.750500 kubelet[3403]: E0912 23:56:28.750274 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:28.751613 kubelet[3403]: E0912 23:56:28.751089 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.751613 kubelet[3403]: W0912 23:56:28.751119 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.751613 kubelet[3403]: E0912 23:56:28.751151 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.753442 kubelet[3403]: E0912 23:56:28.753060 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.753442 kubelet[3403]: W0912 23:56:28.753100 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.753442 kubelet[3403]: E0912 23:56:28.753131 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.757002 kubelet[3403]: E0912 23:56:28.756670 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.757002 kubelet[3403]: W0912 23:56:28.756711 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.757002 kubelet[3403]: E0912 23:56:28.756763 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.757435 kubelet[3403]: E0912 23:56:28.757407 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.757788 kubelet[3403]: W0912 23:56:28.757520 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.757788 kubelet[3403]: E0912 23:56:28.757589 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.758019 kubelet[3403]: E0912 23:56:28.757997 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.758152 kubelet[3403]: W0912 23:56:28.758127 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.758263 kubelet[3403]: E0912 23:56:28.758240 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:28.759748 kubelet[3403]: E0912 23:56:28.759705 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.760405 kubelet[3403]: W0912 23:56:28.759900 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.760405 kubelet[3403]: E0912 23:56:28.759940 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.760974 kubelet[3403]: E0912 23:56:28.760851 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.761233 kubelet[3403]: W0912 23:56:28.761202 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.761462 kubelet[3403]: E0912 23:56:28.761435 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.763146 kubelet[3403]: E0912 23:56:28.762504 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.763146 kubelet[3403]: W0912 23:56:28.762565 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.763146 kubelet[3403]: E0912 23:56:28.762599 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.764845 kubelet[3403]: E0912 23:56:28.764067 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.764845 kubelet[3403]: W0912 23:56:28.764101 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.764845 kubelet[3403]: E0912 23:56:28.764133 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.765718 kubelet[3403]: E0912 23:56:28.765683 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.765860 kubelet[3403]: W0912 23:56:28.765833 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.766585 kubelet[3403]: E0912 23:56:28.765949 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:28.768918 kubelet[3403]: E0912 23:56:28.766808 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.768918 kubelet[3403]: W0912 23:56:28.768328 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.768918 kubelet[3403]: E0912 23:56:28.768373 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.777194 kubelet[3403]: E0912 23:56:28.777142 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.777194 kubelet[3403]: W0912 23:56:28.777181 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.777773 kubelet[3403]: E0912 23:56:28.777216 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.778912 kubelet[3403]: E0912 23:56:28.777823 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.778912 kubelet[3403]: W0912 23:56:28.777847 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.778912 kubelet[3403]: E0912 23:56:28.777909 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.778912 kubelet[3403]: E0912 23:56:28.778378 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.778912 kubelet[3403]: W0912 23:56:28.778403 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.778912 kubelet[3403]: E0912 23:56:28.778474 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.782719 kubelet[3403]: E0912 23:56:28.780400 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.782719 kubelet[3403]: W0912 23:56:28.781669 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.782719 kubelet[3403]: E0912 23:56:28.781705 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:28.783944 kubelet[3403]: E0912 23:56:28.783139 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.783944 kubelet[3403]: W0912 23:56:28.783192 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.783944 kubelet[3403]: E0912 23:56:28.783226 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.786915 kubelet[3403]: E0912 23:56:28.786201 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.786915 kubelet[3403]: W0912 23:56:28.786238 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.787380 kubelet[3403]: E0912 23:56:28.787279 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.787380 kubelet[3403]: W0912 23:56:28.787319 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.788782 kubelet[3403]: E0912 23:56:28.788741 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.788972 kubelet[3403]: E0912 23:56:28.788772 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.789104 kubelet[3403]: W0912 23:56:28.789078 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.789231 kubelet[3403]: E0912 23:56:28.789208 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.789491 kubelet[3403]: E0912 23:56:28.788788 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.790578 kubelet[3403]: E0912 23:56:28.790291 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.790578 kubelet[3403]: W0912 23:56:28.790323 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.790578 kubelet[3403]: E0912 23:56:28.790367 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:28.791167 kubelet[3403]: E0912 23:56:28.790947 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.791167 kubelet[3403]: W0912 23:56:28.790973 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.791167 kubelet[3403]: E0912 23:56:28.791037 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.791483 kubelet[3403]: E0912 23:56:28.791459 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.791600 kubelet[3403]: W0912 23:56:28.791577 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.791772 kubelet[3403]: E0912 23:56:28.791734 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.792210 kubelet[3403]: E0912 23:56:28.792183 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.792618 kubelet[3403]: W0912 23:56:28.792330 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.792618 kubelet[3403]: E0912 23:56:28.792411 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.794411 kubelet[3403]: E0912 23:56:28.793698 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.794411 kubelet[3403]: W0912 23:56:28.793728 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.794411 kubelet[3403]: E0912 23:56:28.793798 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.795457 kubelet[3403]: E0912 23:56:28.794932 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.795457 kubelet[3403]: W0912 23:56:28.794971 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.795457 kubelet[3403]: E0912 23:56:28.795092 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:28.797360 kubelet[3403]: E0912 23:56:28.797307 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.798150 kubelet[3403]: W0912 23:56:28.797852 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.798150 kubelet[3403]: E0912 23:56:28.797914 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.798797 kubelet[3403]: E0912 23:56:28.798755 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.799008 kubelet[3403]: W0912 23:56:28.798977 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.799191 kubelet[3403]: E0912 23:56:28.799157 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.799693 kubelet[3403]: E0912 23:56:28.799659 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.799693 kubelet[3403]: W0912 23:56:28.799691 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.799836 kubelet[3403]: E0912 23:56:28.799724 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:28.800824 kubelet[3403]: E0912 23:56:28.800777 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:28.800824 kubelet[3403]: W0912 23:56:28.800812 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:28.801031 kubelet[3403]: E0912 23:56:28.800842 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:29.461825 kubelet[3403]: E0912 23:56:29.461728 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zn6sm" podUID="f60d9f74-1f96-4c24-9612-25c7fd4febe8" Sep 12 23:56:29.705949 kubelet[3403]: I0912 23:56:29.705880 3403 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:56:29.770404 containerd[2027]: time="2025-09-12T23:56:29.770229062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:29.773754 containerd[2027]: time="2025-09-12T23:56:29.773650622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 23:56:29.775238 kubelet[3403]: E0912 23:56:29.775200 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.777440 kubelet[3403]: W0912 23:56:29.775911 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.777440 kubelet[3403]: E0912 23:56:29.776361 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.778286 containerd[2027]: time="2025-09-12T23:56:29.777163826Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:29.781795 kubelet[3403]: E0912 23:56:29.780594 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.781795 kubelet[3403]: W0912 23:56:29.780647 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.781795 kubelet[3403]: E0912 23:56:29.780877 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.783049 kubelet[3403]: E0912 23:56:29.782217 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.783049 kubelet[3403]: W0912 23:56:29.782273 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.783049 kubelet[3403]: E0912 23:56:29.782308 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:29.784310 kubelet[3403]: E0912 23:56:29.783567 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.784310 kubelet[3403]: W0912 23:56:29.783602 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.784310 kubelet[3403]: E0912 23:56:29.783633 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.787600 kubelet[3403]: E0912 23:56:29.786137 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.787600 kubelet[3403]: W0912 23:56:29.786175 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.787600 kubelet[3403]: E0912 23:56:29.786208 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.791998 kubelet[3403]: E0912 23:56:29.791737 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.791998 kubelet[3403]: W0912 23:56:29.791774 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.791998 kubelet[3403]: E0912 23:56:29.791808 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.795009 containerd[2027]: time="2025-09-12T23:56:29.794201414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:29.796004 kubelet[3403]: E0912 23:56:29.795373 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.796004 kubelet[3403]: W0912 23:56:29.795407 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.796004 kubelet[3403]: E0912 23:56:29.795439 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:29.797184 kubelet[3403]: E0912 23:56:29.796771 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.797376 kubelet[3403]: W0912 23:56:29.797347 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.797493 kubelet[3403]: E0912 23:56:29.797466 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.799011 kubelet[3403]: E0912 23:56:29.798882 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.800699 kubelet[3403]: W0912 23:56:29.799336 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.800699 kubelet[3403]: E0912 23:56:29.799382 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.801467 kubelet[3403]: E0912 23:56:29.801229 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.801467 kubelet[3403]: W0912 23:56:29.801263 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.801467 kubelet[3403]: E0912 23:56:29.801294 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.803537 kubelet[3403]: E0912 23:56:29.803401 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.804048 kubelet[3403]: W0912 23:56:29.803909 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.804466 kubelet[3403]: E0912 23:56:29.804317 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:29.805051 containerd[2027]: time="2025-09-12T23:56:29.804763910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.403541211s" Sep 12 23:56:29.805051 containerd[2027]: time="2025-09-12T23:56:29.804833498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 23:56:29.807049 kubelet[3403]: E0912 23:56:29.806403 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.807049 kubelet[3403]: W0912 23:56:29.806438 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.807049 kubelet[3403]: E0912 23:56:29.806471 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.808568 kubelet[3403]: E0912 23:56:29.808083 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.808568 kubelet[3403]: W0912 23:56:29.808111 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.808568 kubelet[3403]: E0912 23:56:29.808140 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.810430 kubelet[3403]: E0912 23:56:29.809784 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.810430 kubelet[3403]: W0912 23:56:29.809817 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.810430 kubelet[3403]: E0912 23:56:29.809849 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.812089 kubelet[3403]: E0912 23:56:29.811772 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.812089 kubelet[3403]: W0912 23:56:29.811807 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.812089 kubelet[3403]: E0912 23:56:29.811837 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:29.816258 containerd[2027]: time="2025-09-12T23:56:29.816170918Z" level=info msg="CreateContainer within sandbox \"a063701be077aa3d99bb07dd89ef4def063eafb7a99196ee86519aba69ca5fab\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 23:56:29.816937 kubelet[3403]: E0912 23:56:29.816529 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.816937 kubelet[3403]: W0912 23:56:29.816644 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.816937 kubelet[3403]: E0912 23:56:29.816678 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.817881 kubelet[3403]: E0912 23:56:29.817846 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.819188 kubelet[3403]: W0912 23:56:29.818871 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.819188 kubelet[3403]: E0912 23:56:29.818929 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.821222 kubelet[3403]: E0912 23:56:29.820624 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.821222 kubelet[3403]: W0912 23:56:29.820658 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.821222 kubelet[3403]: E0912 23:56:29.820704 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.822467 kubelet[3403]: E0912 23:56:29.822409 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.822467 kubelet[3403]: W0912 23:56:29.822451 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.822749 kubelet[3403]: E0912 23:56:29.822488 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:29.824949 kubelet[3403]: E0912 23:56:29.824903 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.824949 kubelet[3403]: W0912 23:56:29.824943 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.825281 kubelet[3403]: E0912 23:56:29.825208 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.827194 kubelet[3403]: E0912 23:56:29.827133 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.827194 kubelet[3403]: W0912 23:56:29.827176 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.827194 kubelet[3403]: E0912 23:56:29.827340 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.829499 kubelet[3403]: E0912 23:56:29.829235 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.829499 kubelet[3403]: W0912 23:56:29.829277 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.829499 kubelet[3403]: E0912 23:56:29.829440 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.832592 kubelet[3403]: E0912 23:56:29.832182 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.832592 kubelet[3403]: W0912 23:56:29.832224 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.833750 kubelet[3403]: E0912 23:56:29.832864 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.833750 kubelet[3403]: E0912 23:56:29.833005 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.833750 kubelet[3403]: W0912 23:56:29.833024 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.833750 kubelet[3403]: E0912 23:56:29.833696 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:29.834061 kubelet[3403]: E0912 23:56:29.833841 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.834061 kubelet[3403]: W0912 23:56:29.833860 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.834061 kubelet[3403]: E0912 23:56:29.833887 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.834854 kubelet[3403]: E0912 23:56:29.834799 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.834854 kubelet[3403]: W0912 23:56:29.834837 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.835485 kubelet[3403]: E0912 23:56:29.835083 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.836811 kubelet[3403]: E0912 23:56:29.836732 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.836811 kubelet[3403]: W0912 23:56:29.836787 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.838475 kubelet[3403]: E0912 23:56:29.838021 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.839092 kubelet[3403]: E0912 23:56:29.839030 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.839227 kubelet[3403]: W0912 23:56:29.839075 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.841416 kubelet[3403]: E0912 23:56:29.839530 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.841416 kubelet[3403]: E0912 23:56:29.839860 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.841416 kubelet[3403]: W0912 23:56:29.840623 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.841718 kubelet[3403]: E0912 23:56:29.841388 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:56:29.842872 kubelet[3403]: E0912 23:56:29.842815 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.843005 kubelet[3403]: W0912 23:56:29.842859 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.843579 kubelet[3403]: E0912 23:56:29.843317 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.845153 kubelet[3403]: E0912 23:56:29.845107 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.845153 kubelet[3403]: W0912 23:56:29.845147 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.845486 kubelet[3403]: E0912 23:56:29.845441 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.846471 kubelet[3403]: E0912 23:56:29.846415 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.846621 kubelet[3403]: W0912 23:56:29.846457 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.846621 kubelet[3403]: E0912 23:56:29.846529 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.848500 kubelet[3403]: E0912 23:56:29.848023 3403 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:56:29.848500 kubelet[3403]: W0912 23:56:29.848052 3403 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:56:29.848500 kubelet[3403]: E0912 23:56:29.848082 3403 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:56:29.875676 containerd[2027]: time="2025-09-12T23:56:29.875355231Z" level=info msg="CreateContainer within sandbox \"a063701be077aa3d99bb07dd89ef4def063eafb7a99196ee86519aba69ca5fab\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"589432552c4151aff9824f3864ea03d39c33b809efe5c002ac53e83b86c275fc\"" Sep 12 23:56:29.876698 containerd[2027]: time="2025-09-12T23:56:29.876627015Z" level=info msg="StartContainer for \"589432552c4151aff9824f3864ea03d39c33b809efe5c002ac53e83b86c275fc\"" Sep 12 23:56:29.969172 systemd[1]: Started cri-containerd-589432552c4151aff9824f3864ea03d39c33b809efe5c002ac53e83b86c275fc.scope - libcontainer container 589432552c4151aff9824f3864ea03d39c33b809efe5c002ac53e83b86c275fc. 
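[Annotation] The kubelet errors above repeat in lockstep because the FlexVolume prober keeps re-probing the plugin directory: it runs each driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument `init` and parses the JSON the driver prints on stdout. The `nodeagent~uds/uds` binary does not exist yet, so the exec fails, the captured output is empty, and decoding the empty output fails. Both logged error strings come straight from the Go standard library, as this minimal, stand-alone sketch shows (the bare command name `uds` is illustrative; kubelet resolves the real driver path itself):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// kubelet probes a FlexVolume driver by running "<driver> init" and
	// parsing the JSON it prints. A command name that is not installed
	// makes Go's LookPath fail with the same string kubelet logged.
	out, err := exec.Command("uds", "init").CombinedOutput()
	fmt.Printf("driver call: err=%v output=%q\n", err, out)
	// err: exec: "uds": executable file not found in $PATH

	// With nothing captured, decoding the empty byte slice yields the
	// second logged error verbatim.
	var status struct {
		Status string `json:"status"`
	}
	err = json.Unmarshal(out, &status)
	fmt.Println("unmarshal:", err)
	// unmarshal: unexpected end of JSON input
}
```

The `flexvol-driver` container started in the entry just above comes from the calico/pod2daemon-flexvol image pulled earlier, which installs exactly that `uds` binary; the probe errors stop after this point in the log.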
Sep 12 23:56:30.032035 containerd[2027]: time="2025-09-12T23:56:30.031520447Z" level=info msg="StartContainer for \"589432552c4151aff9824f3864ea03d39c33b809efe5c002ac53e83b86c275fc\" returns successfully" Sep 12 23:56:30.067745 systemd[1]: cri-containerd-589432552c4151aff9824f3864ea03d39c33b809efe5c002ac53e83b86c275fc.scope: Deactivated successfully. Sep 12 23:56:30.118265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-589432552c4151aff9824f3864ea03d39c33b809efe5c002ac53e83b86c275fc-rootfs.mount: Deactivated successfully. Sep 12 23:56:30.473887 containerd[2027]: time="2025-09-12T23:56:30.473656418Z" level=info msg="shim disconnected" id=589432552c4151aff9824f3864ea03d39c33b809efe5c002ac53e83b86c275fc namespace=k8s.io Sep 12 23:56:30.473887 containerd[2027]: time="2025-09-12T23:56:30.473733194Z" level=warning msg="cleaning up after shim disconnected" id=589432552c4151aff9824f3864ea03d39c33b809efe5c002ac53e83b86c275fc namespace=k8s.io Sep 12 23:56:30.473887 containerd[2027]: time="2025-09-12T23:56:30.473755322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:56:30.715104 containerd[2027]: time="2025-09-12T23:56:30.714957567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 23:56:31.461484 kubelet[3403]: E0912 23:56:31.461410 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zn6sm" podUID="f60d9f74-1f96-4c24-9612-25c7fd4febe8" Sep 12 23:56:33.461285 kubelet[3403]: E0912 23:56:33.461218 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zn6sm" podUID="f60d9f74-1f96-4c24-9612-25c7fd4febe8" Sep 12 23:56:33.550871 containerd[2027]: time="2025-09-12T23:56:33.550791461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:33.551940 containerd[2027]: time="2025-09-12T23:56:33.551880833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 23:56:33.553581 containerd[2027]: time="2025-09-12T23:56:33.553388549Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:33.558046 containerd[2027]: time="2025-09-12T23:56:33.557991869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:33.560577 containerd[2027]: time="2025-09-12T23:56:33.559608329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.844563246s" Sep 12 23:56:33.560577 containerd[2027]: time="2025-09-12T23:56:33.559667213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image 
reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 23:56:33.565405 containerd[2027]: time="2025-09-12T23:56:33.565218521Z" level=info msg="CreateContainer within sandbox \"a063701be077aa3d99bb07dd89ef4def063eafb7a99196ee86519aba69ca5fab\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 23:56:33.590224 containerd[2027]: time="2025-09-12T23:56:33.590005637Z" level=info msg="CreateContainer within sandbox \"a063701be077aa3d99bb07dd89ef4def063eafb7a99196ee86519aba69ca5fab\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c00e14595b6c72f4a56f8cef74a663bbd60a2b62c7fb3e42830cf0c3e61e5d00\"" Sep 12 23:56:33.591374 containerd[2027]: time="2025-09-12T23:56:33.591287597Z" level=info msg="StartContainer for \"c00e14595b6c72f4a56f8cef74a663bbd60a2b62c7fb3e42830cf0c3e61e5d00\"" Sep 12 23:56:33.661931 systemd[1]: Started cri-containerd-c00e14595b6c72f4a56f8cef74a663bbd60a2b62c7fb3e42830cf0c3e61e5d00.scope - libcontainer container c00e14595b6c72f4a56f8cef74a663bbd60a2b62c7fb3e42830cf0c3e61e5d00. Sep 12 23:56:33.729672 containerd[2027]: time="2025-09-12T23:56:33.729412170Z" level=info msg="StartContainer for \"c00e14595b6c72f4a56f8cef74a663bbd60a2b62c7fb3e42830cf0c3e61e5d00\" returns successfully" Sep 12 23:56:34.855968 containerd[2027]: time="2025-09-12T23:56:34.855897211Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:56:34.861625 systemd[1]: cri-containerd-c00e14595b6c72f4a56f8cef74a663bbd60a2b62c7fb3e42830cf0c3e61e5d00.scope: Deactivated successfully. Sep 12 23:56:34.864359 systemd[1]: cri-containerd-c00e14595b6c72f4a56f8cef74a663bbd60a2b62c7fb3e42830cf0c3e61e5d00.scope: Consumed 1.014s CPU time. Sep 12 23:56:34.906361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c00e14595b6c72f4a56f8cef74a663bbd60a2b62c7fb3e42830cf0c3e61e5d00-rootfs.mount: Deactivated successfully. 
Sep 12 23:56:34.914111 kubelet[3403]: I0912 23:56:34.913870 3403 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 23:56:35.005579 kubelet[3403]: I0912 23:56:35.003916 3403 status_manager.go:890] "Failed to get status for pod" podUID="6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e" pod="kube-system/coredns-668d6bf9bc-vm44c" err="pods \"coredns-668d6bf9bc-vm44c\" is forbidden: User \"system:node:ip-172-31-20-162\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-20-162' and this object" Sep 12 23:56:35.006418 kubelet[3403]: W0912 23:56:35.006173 3403 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-20-162" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-162' and this object Sep 12 23:56:35.010746 kubelet[3403]: E0912 23:56:35.006619 3403 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-20-162\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-20-162' and this object" logger="UnhandledError" Sep 12 23:56:35.015316 systemd[1]: Created slice kubepods-burstable-pod6dbd52fd_63e0_48d0_bb70_dd7ef684ad7e.slice - libcontainer container kubepods-burstable-pod6dbd52fd_63e0_48d0_bb70_dd7ef684ad7e.slice. Sep 12 23:56:35.067307 kubelet[3403]: I0912 23:56:35.067027 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd4xb\" (UniqueName: \"kubernetes.io/projected/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-kube-api-access-rd4xb\") pod \"whisker-b884ffb96-z8ck8\" (UID: \"bc37ffcb-5047-4cd5-a85c-ffd3b1486c49\") " pod="calico-system/whisker-b884ffb96-z8ck8" Sep 12 23:56:35.067307 kubelet[3403]: I0912 23:56:35.067118 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-whisker-backend-key-pair\") pod \"whisker-b884ffb96-z8ck8\" (UID: \"bc37ffcb-5047-4cd5-a85c-ffd3b1486c49\") " pod="calico-system/whisker-b884ffb96-z8ck8" Sep 12 23:56:35.067307 kubelet[3403]: I0912 23:56:35.067223 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-whisker-ca-bundle\") pod \"whisker-b884ffb96-z8ck8\" (UID: \"bc37ffcb-5047-4cd5-a85c-ffd3b1486c49\") " pod="calico-system/whisker-b884ffb96-z8ck8" Sep 12 23:56:35.105344 systemd[1]: Created slice kubepods-burstable-podc1a3fe73_b569_44ae_8604_d1dae69d130b.slice - libcontainer container kubepods-burstable-podc1a3fe73_b569_44ae_8604_d1dae69d130b.slice. Sep 12 23:56:35.123745 systemd[1]: Created slice kubepods-besteffort-podbc37ffcb_5047_4cd5_a85c_ffd3b1486c49.slice - libcontainer container kubepods-besteffort-podbc37ffcb_5047_4cd5_a85c_ffd3b1486c49.slice. Sep 12 23:56:35.147832 systemd[1]: Created slice kubepods-besteffort-pode0c64f6a_d5cd_4c90_b7b6_77627d265c99.slice - libcontainer container kubepods-besteffort-pode0c64f6a_d5cd_4c90_b7b6_77627d265c99.slice. 
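[Annotation] The burst of "no relationship found between node 'ip-172-31-20-162' and this object" messages is the node authorizer at work: the moment the node flips to ready, pods are bound to it, and kubelet may request a pod's ConfigMaps and Secrets before the authorizer's graph records the pod-to-node edge, so the first get/list/watch attempts come back Forbidden and are simply retried. A hedged client-go sketch of the same request kubelet's reflector makes (assumes k8s.io/client-go in go.mod and $KUBECONFIG pointing at a cluster; run with node-scoped credentials before any pod on that node references the ConfigMap and it fails with the identical message):

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The coredns ConfigMap is what the reflector above was listing.
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(
		context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		fmt.Println("get coredns configmap:", err)
		return
	}
	fmt.Println("got configmap:", cm.Name)
}
```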
Sep 12 23:56:35.168761 systemd[1]: Created slice kubepods-besteffort-podeeb53fe4_c09c_4147_b78d_113cabffc7ee.slice - libcontainer container kubepods-besteffort-podeeb53fe4_c09c_4147_b78d_113cabffc7ee.slice. Sep 12 23:56:35.174484 kubelet[3403]: I0912 23:56:35.174219 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e-config-volume\") pod \"coredns-668d6bf9bc-vm44c\" (UID: \"6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e\") " pod="kube-system/coredns-668d6bf9bc-vm44c" Sep 12 23:56:35.186337 kubelet[3403]: I0912 23:56:35.185890 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlxk8\" (UniqueName: \"kubernetes.io/projected/e0c64f6a-d5cd-4c90-b7b6-77627d265c99-kube-api-access-hlxk8\") pod \"goldmane-54d579b49d-6mb7r\" (UID: \"e0c64f6a-d5cd-4c90-b7b6-77627d265c99\") " pod="calico-system/goldmane-54d579b49d-6mb7r" Sep 12 23:56:35.189040 kubelet[3403]: I0912 23:56:35.188995 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssbjs\" (UniqueName: \"kubernetes.io/projected/64bd2045-89ff-4c72-8c98-7616c71dee49-kube-api-access-ssbjs\") pod \"calico-apiserver-88f4475-t5fsb\" (UID: \"64bd2045-89ff-4c72-8c98-7616c71dee49\") " pod="calico-apiserver/calico-apiserver-88f4475-t5fsb" Sep 12 23:56:35.189293 kubelet[3403]: I0912 23:56:35.189255 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e0c64f6a-d5cd-4c90-b7b6-77627d265c99-goldmane-key-pair\") pod \"goldmane-54d579b49d-6mb7r\" (UID: \"e0c64f6a-d5cd-4c90-b7b6-77627d265c99\") " pod="calico-system/goldmane-54d579b49d-6mb7r" Sep 12 23:56:35.189478 kubelet[3403]: I0912 23:56:35.189450 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdcda89e-ca4a-4a78-8c84-d924c20fa355-tigera-ca-bundle\") pod \"calico-kube-controllers-5bc58cd45c-f245z\" (UID: \"cdcda89e-ca4a-4a78-8c84-d924c20fa355\") " pod="calico-system/calico-kube-controllers-5bc58cd45c-f245z" Sep 12 23:56:35.190575 kubelet[3403]: I0912 23:56:35.189622 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6wtd\" (UniqueName: \"kubernetes.io/projected/cdcda89e-ca4a-4a78-8c84-d924c20fa355-kube-api-access-b6wtd\") pod \"calico-kube-controllers-5bc58cd45c-f245z\" (UID: \"cdcda89e-ca4a-4a78-8c84-d924c20fa355\") " pod="calico-system/calico-kube-controllers-5bc58cd45c-f245z" Sep 12 23:56:35.190575 kubelet[3403]: I0912 23:56:35.189675 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrk6r\" (UniqueName: \"kubernetes.io/projected/6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e-kube-api-access-hrk6r\") pod \"coredns-668d6bf9bc-vm44c\" (UID: \"6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e\") " pod="kube-system/coredns-668d6bf9bc-vm44c" Sep 12 23:56:35.190575 kubelet[3403]: I0912 23:56:35.189711 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1a3fe73-b569-44ae-8604-d1dae69d130b-config-volume\") pod \"coredns-668d6bf9bc-xpl6r\" (UID: \"c1a3fe73-b569-44ae-8604-d1dae69d130b\") " pod="kube-system/coredns-668d6bf9bc-xpl6r" Sep 
12 23:56:35.190575 kubelet[3403]: I0912 23:56:35.189750 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjnm8\" (UniqueName: \"kubernetes.io/projected/c1a3fe73-b569-44ae-8604-d1dae69d130b-kube-api-access-wjnm8\") pod \"coredns-668d6bf9bc-xpl6r\" (UID: \"c1a3fe73-b569-44ae-8604-d1dae69d130b\") " pod="kube-system/coredns-668d6bf9bc-xpl6r" Sep 12 23:56:35.190575 kubelet[3403]: I0912 23:56:35.189813 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0c64f6a-d5cd-4c90-b7b6-77627d265c99-config\") pod \"goldmane-54d579b49d-6mb7r\" (UID: \"e0c64f6a-d5cd-4c90-b7b6-77627d265c99\") " pod="calico-system/goldmane-54d579b49d-6mb7r" Sep 12 23:56:35.190938 kubelet[3403]: I0912 23:56:35.189848 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0c64f6a-d5cd-4c90-b7b6-77627d265c99-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-6mb7r\" (UID: \"e0c64f6a-d5cd-4c90-b7b6-77627d265c99\") " pod="calico-system/goldmane-54d579b49d-6mb7r" Sep 12 23:56:35.190938 kubelet[3403]: I0912 23:56:35.189899 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/64bd2045-89ff-4c72-8c98-7616c71dee49-calico-apiserver-certs\") pod \"calico-apiserver-88f4475-t5fsb\" (UID: \"64bd2045-89ff-4c72-8c98-7616c71dee49\") " pod="calico-apiserver/calico-apiserver-88f4475-t5fsb" Sep 12 23:56:35.190938 kubelet[3403]: I0912 23:56:35.189936 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eeb53fe4-c09c-4147-b78d-113cabffc7ee-calico-apiserver-certs\") pod \"calico-apiserver-88f4475-4dq8d\" (UID: \"eeb53fe4-c09c-4147-b78d-113cabffc7ee\") " pod="calico-apiserver/calico-apiserver-88f4475-4dq8d" Sep 12 23:56:35.190938 kubelet[3403]: I0912 23:56:35.189977 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hfq4\" (UniqueName: \"kubernetes.io/projected/eeb53fe4-c09c-4147-b78d-113cabffc7ee-kube-api-access-2hfq4\") pod \"calico-apiserver-88f4475-4dq8d\" (UID: \"eeb53fe4-c09c-4147-b78d-113cabffc7ee\") " pod="calico-apiserver/calico-apiserver-88f4475-4dq8d" Sep 12 23:56:35.218722 systemd[1]: Created slice kubepods-besteffort-podcdcda89e_ca4a_4a78_8c84_d924c20fa355.slice - libcontainer container kubepods-besteffort-podcdcda89e_ca4a_4a78_8c84_d924c20fa355.slice. Sep 12 23:56:35.258986 systemd[1]: Created slice kubepods-besteffort-pod64bd2045_89ff_4c72_8c98_7616c71dee49.slice - libcontainer container kubepods-besteffort-pod64bd2045_89ff_4c72_8c98_7616c71dee49.slice. 
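[Annotation] Each VerifyControllerAttachedVolume entry maps one volume in a pod spec to the in-tree plugin that will mount it: kubernetes.io/projected for service-account tokens, kubernetes.io/secret for key pairs and API server certs, kubernetes.io/configmap for CA bundles and the coredns config. A sketch of the whisker pod's three volumes rebuilt as pod-spec fields with the k8s.io/api types (assumed to be in go.mod; volume names are verbatim from the log, and the Secret/ConfigMap object names are assumed to match the volume names):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1" // assumed dependency
)

func main() {
	// The three whisker-b884ffb96-z8ck8 volumes the reconciler verified.
	vols := []corev1.Volume{
		{Name: "kube-api-access-rd4xb", VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{Sources: []corev1.VolumeProjection{
				{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
			}},
		}},
		{Name: "whisker-backend-key-pair", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "whisker-backend-key-pair"},
		}},
		{Name: "whisker-ca-bundle", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "whisker-ca-bundle"},
			},
		}},
	}
	for _, v := range vols {
		fmt.Println("verify attach/mount for volume:", v.Name)
	}
}
```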
Sep 12 23:56:35.374918 containerd[2027]: time="2025-09-12T23:56:35.373649046Z" level=info msg="shim disconnected" id=c00e14595b6c72f4a56f8cef74a663bbd60a2b62c7fb3e42830cf0c3e61e5d00 namespace=k8s.io Sep 12 23:56:35.374918 containerd[2027]: time="2025-09-12T23:56:35.373780902Z" level=warning msg="cleaning up after shim disconnected" id=c00e14595b6c72f4a56f8cef74a663bbd60a2b62c7fb3e42830cf0c3e61e5d00 namespace=k8s.io Sep 12 23:56:35.374918 containerd[2027]: time="2025-09-12T23:56:35.373803858Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:56:35.411619 containerd[2027]: time="2025-09-12T23:56:35.411082302Z" level=warning msg="cleanup warnings time=\"2025-09-12T23:56:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 23:56:35.433447 containerd[2027]: time="2025-09-12T23:56:35.433396662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b884ffb96-z8ck8,Uid:bc37ffcb-5047-4cd5-a85c-ffd3b1486c49,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:35.471426 containerd[2027]: time="2025-09-12T23:56:35.470536326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6mb7r,Uid:e0c64f6a-d5cd-4c90-b7b6-77627d265c99,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:35.487484 containerd[2027]: time="2025-09-12T23:56:35.487141134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f4475-4dq8d,Uid:eeb53fe4-c09c-4147-b78d-113cabffc7ee,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:56:35.489159 systemd[1]: Created slice kubepods-besteffort-podf60d9f74_1f96_4c24_9612_25c7fd4febe8.slice - libcontainer container kubepods-besteffort-podf60d9f74_1f96_4c24_9612_25c7fd4febe8.slice. 
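[Annotation] With install-cni finished (the "shim disconnected" lines and the runc cleanup warning are the normal teardown of a completed init container), kubelet starts sandboxes for every pod admitted above, and each RunPodSandbox drives the Calico CNI plugin. Per the CNI spec, a runtime execs the plugin binary with CNI_* environment variables and the network config on stdin; a simplified sketch of that call (the plugin path, netns path, and config body are illustrative, and the container id is truncated from the log):

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Minimal CNI ADD invocation as a runtime would issue it.
	conf := []byte(`{"cniVersion":"0.4.0","name":"k8s-pod-network","type":"calico"}`)
	cmd := exec.Command("/opt/cni/bin/calico")
	cmd.Env = append(cmd.Env,
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID=d7d6f96815cd", // sandbox id (truncated)
		"CNI_NETNS=/var/run/netns/example",
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = bytes.NewReader(conf)
	out, err := cmd.CombinedOutput()
	fmt.Printf("err=%v\nout=%s\n", err, out)
	// On the node in this log, the plugin exits with:
	//   stat /var/lib/calico/nodename: no such file or directory
}
```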
Sep 12 23:56:35.500654 containerd[2027]: time="2025-09-12T23:56:35.500533771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zn6sm,Uid:f60d9f74-1f96-4c24-9612-25c7fd4febe8,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:35.553357 containerd[2027]: time="2025-09-12T23:56:35.553298719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bc58cd45c-f245z,Uid:cdcda89e-ca4a-4a78-8c84-d924c20fa355,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:35.573387 containerd[2027]: time="2025-09-12T23:56:35.572616331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f4475-t5fsb,Uid:64bd2045-89ff-4c72-8c98-7616c71dee49,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:56:35.754785 containerd[2027]: time="2025-09-12T23:56:35.754635236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 23:56:35.849096 containerd[2027]: time="2025-09-12T23:56:35.848949752Z" level=error msg="Failed to destroy network for sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.850199 containerd[2027]: time="2025-09-12T23:56:35.850099076Z" level=error msg="encountered an error cleaning up failed sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.850356 containerd[2027]: time="2025-09-12T23:56:35.850254200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b884ffb96-z8ck8,Uid:bc37ffcb-5047-4cd5-a85c-ffd3b1486c49,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.852303 kubelet[3403]: E0912 23:56:35.851704 3403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.852303 kubelet[3403]: E0912 23:56:35.851812 3403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b884ffb96-z8ck8" Sep 12 23:56:35.852303 kubelet[3403]: E0912 23:56:35.851850 3403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b884ffb96-z8ck8" Sep 12 23:56:35.852600 kubelet[3403]: E0912 23:56:35.851943 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b884ffb96-z8ck8_calico-system(bc37ffcb-5047-4cd5-a85c-ffd3b1486c49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b884ffb96-z8ck8_calico-system(bc37ffcb-5047-4cd5-a85c-ffd3b1486c49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b884ffb96-z8ck8" podUID="bc37ffcb-5047-4cd5-a85c-ffd3b1486c49" Sep 12 23:56:35.894044 containerd[2027]: time="2025-09-12T23:56:35.893976741Z" level=error msg="Failed to destroy network for sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.904663 containerd[2027]: time="2025-09-12T23:56:35.900703785Z" level=error msg="encountered an error cleaning up failed sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.916044 containerd[2027]: time="2025-09-12T23:56:35.915936897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zn6sm,Uid:f60d9f74-1f96-4c24-9612-25c7fd4febe8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.921574 kubelet[3403]: E0912 23:56:35.916523 3403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.921574 kubelet[3403]: E0912 23:56:35.916651 3403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zn6sm" Sep 12 23:56:35.921574 kubelet[3403]: E0912 23:56:35.916704 3403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zn6sm" Sep 12 23:56:35.922281 kubelet[3403]: E0912 23:56:35.916778 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zn6sm_calico-system(f60d9f74-1f96-4c24-9612-25c7fd4febe8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zn6sm_calico-system(f60d9f74-1f96-4c24-9612-25c7fd4febe8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zn6sm" podUID="f60d9f74-1f96-4c24-9612-25c7fd4febe8" Sep 12 23:56:35.927411 containerd[2027]: time="2025-09-12T23:56:35.926653905Z" level=error msg="Failed to destroy network for sandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.927411 containerd[2027]: time="2025-09-12T23:56:35.927187461Z" level=error msg="encountered an error cleaning up failed sandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.927411 containerd[2027]: time="2025-09-12T23:56:35.927259461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f4475-4dq8d,Uid:eeb53fe4-c09c-4147-b78d-113cabffc7ee,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.929863 kubelet[3403]: E0912 23:56:35.927649 3403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.929863 kubelet[3403]: E0912 23:56:35.927742 3403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-88f4475-4dq8d" Sep 12 23:56:35.929863 kubelet[3403]: E0912 23:56:35.927778 3403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-88f4475-4dq8d" Sep 12 23:56:35.930146 kubelet[3403]: E0912 23:56:35.927865 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-88f4475-4dq8d_calico-apiserver(eeb53fe4-c09c-4147-b78d-113cabffc7ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-88f4475-4dq8d_calico-apiserver(eeb53fe4-c09c-4147-b78d-113cabffc7ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-88f4475-4dq8d" podUID="eeb53fe4-c09c-4147-b78d-113cabffc7ee" Sep 12 23:56:35.955058 containerd[2027]: time="2025-09-12T23:56:35.947685093Z" level=error msg="Failed to destroy network for sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.957048 containerd[2027]: time="2025-09-12T23:56:35.956981781Z" level=error msg="encountered an error cleaning up failed sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.957288 containerd[2027]: time="2025-09-12T23:56:35.957241113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6mb7r,Uid:e0c64f6a-d5cd-4c90-b7b6-77627d265c99,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.958804 kubelet[3403]: E0912 23:56:35.957898 3403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.958804 kubelet[3403]: E0912 23:56:35.957991 3403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-6mb7r" Sep 12 23:56:35.958804 kubelet[3403]: E0912 23:56:35.958025 3403 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-6mb7r" Sep 12 23:56:35.959078 kubelet[3403]: E0912 23:56:35.958108 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-6mb7r_calico-system(e0c64f6a-d5cd-4c90-b7b6-77627d265c99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-6mb7r_calico-system(e0c64f6a-d5cd-4c90-b7b6-77627d265c99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-6mb7r" podUID="e0c64f6a-d5cd-4c90-b7b6-77627d265c99" Sep 12 23:56:35.964807 containerd[2027]: time="2025-09-12T23:56:35.964722381Z" level=error msg="Failed to destroy network for sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.967935 containerd[2027]: time="2025-09-12T23:56:35.967871469Z" level=error msg="encountered an error cleaning up failed sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.968175 containerd[2027]: time="2025-09-12T23:56:35.968132145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f4475-t5fsb,Uid:64bd2045-89ff-4c72-8c98-7616c71dee49,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.970568 kubelet[3403]: E0912 23:56:35.968601 3403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.970568 kubelet[3403]: E0912 23:56:35.968699 3403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-88f4475-t5fsb" Sep 12 
23:56:35.970568 kubelet[3403]: E0912 23:56:35.968734 3403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-88f4475-t5fsb" Sep 12 23:56:35.970854 kubelet[3403]: E0912 23:56:35.968822 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-88f4475-t5fsb_calico-apiserver(64bd2045-89ff-4c72-8c98-7616c71dee49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-88f4475-t5fsb_calico-apiserver(64bd2045-89ff-4c72-8c98-7616c71dee49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-88f4475-t5fsb" podUID="64bd2045-89ff-4c72-8c98-7616c71dee49" Sep 12 23:56:35.985409 containerd[2027]: time="2025-09-12T23:56:35.985223061Z" level=error msg="Failed to destroy network for sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.989583 containerd[2027]: time="2025-09-12T23:56:35.987001017Z" level=error msg="encountered an error cleaning up failed sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.989583 containerd[2027]: time="2025-09-12T23:56:35.987095565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bc58cd45c-f245z,Uid:cdcda89e-ca4a-4a78-8c84-d924c20fa355,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.990510 kubelet[3403]: E0912 23:56:35.990022 3403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:35.990510 kubelet[3403]: E0912 23:56:35.990103 3403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bc58cd45c-f245z" Sep 12 23:56:35.990510 kubelet[3403]: E0912 23:56:35.990136 3403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bc58cd45c-f245z" Sep 12 23:56:35.990802 kubelet[3403]: E0912 23:56:35.990201 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bc58cd45c-f245z_calico-system(cdcda89e-ca4a-4a78-8c84-d924c20fa355)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bc58cd45c-f245z_calico-system(cdcda89e-ca4a-4a78-8c84-d924c20fa355)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bc58cd45c-f245z" podUID="cdcda89e-ca4a-4a78-8c84-d924c20fa355" Sep 12 23:56:36.001125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65-shm.mount: Deactivated successfully. Sep 12 23:56:36.001332 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7-shm.mount: Deactivated successfully. Sep 12 23:56:36.001493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34-shm.mount: Deactivated successfully. Sep 12 23:56:36.001687 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83-shm.mount: Deactivated successfully. 
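[Annotation] Every sandbox above fails for the same reason, stated in the error itself: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes at startup to record which Calico node object this host is, and calico/node (whose node:v3.30.3 image pull began above) is not running yet. A stdlib reproduction of that readiness check:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename" // path from the log

func main() {
	// The CNI plugin reads this file on every ADD/DEL; until calico/node
	// creates it, each sandbox attempt fails exactly as logged.
	b, err := os.ReadFile(nodenameFile)
	switch {
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println("stat /var/lib/calico/nodename: no such file or directory:",
			"check that the calico/node container is running and has mounted /var/lib/calico/")
	case err != nil:
		fmt.Println("unexpected error:", err)
	default:
		fmt.Println("calico nodename:", string(b))
	}
}
```

kubelet requeues each failed pod with backoff, which is why the identical CreatePodSandboxError recurs below for the coredns pods, followed by StopPodSandbox cleanup of the abandoned sandbox ids, until the file exists.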
Sep 12 23:56:36.284462 containerd[2027]: time="2025-09-12T23:56:36.283826766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vm44c,Uid:6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e,Namespace:kube-system,Attempt:0,}" Sep 12 23:56:36.319474 containerd[2027]: time="2025-09-12T23:56:36.318013783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xpl6r,Uid:c1a3fe73-b569-44ae-8604-d1dae69d130b,Namespace:kube-system,Attempt:0,}" Sep 12 23:56:36.433247 containerd[2027]: time="2025-09-12T23:56:36.433075027Z" level=error msg="Failed to destroy network for sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:36.434926 containerd[2027]: time="2025-09-12T23:56:36.434757151Z" level=error msg="encountered an error cleaning up failed sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:36.435390 containerd[2027]: time="2025-09-12T23:56:36.435226459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vm44c,Uid:6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:36.435984 kubelet[3403]: E0912 23:56:36.435934 3403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:36.436754 kubelet[3403]: E0912 23:56:36.436642 3403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vm44c" Sep 12 23:56:36.437063 kubelet[3403]: E0912 23:56:36.436899 3403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vm44c" Sep 12 23:56:36.437063 kubelet[3403]: E0912 23:56:36.437006 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vm44c_kube-system(6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-668d6bf9bc-vm44c_kube-system(6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vm44c" podUID="6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e" Sep 12 23:56:36.449604 containerd[2027]: time="2025-09-12T23:56:36.449485939Z" level=error msg="Failed to destroy network for sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:36.450157 containerd[2027]: time="2025-09-12T23:56:36.450102715Z" level=error msg="encountered an error cleaning up failed sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:36.450250 containerd[2027]: time="2025-09-12T23:56:36.450188815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xpl6r,Uid:c1a3fe73-b569-44ae-8604-d1dae69d130b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:36.450528 kubelet[3403]: E0912 23:56:36.450476 3403 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:36.450645 kubelet[3403]: E0912 23:56:36.450603 3403 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xpl6r" Sep 12 23:56:36.450736 kubelet[3403]: E0912 23:56:36.450643 3403 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xpl6r" Sep 12 23:56:36.450736 kubelet[3403]: E0912 23:56:36.450706 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-xpl6r_kube-system(c1a3fe73-b569-44ae-8604-d1dae69d130b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xpl6r_kube-system(c1a3fe73-b569-44ae-8604-d1dae69d130b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xpl6r" podUID="c1a3fe73-b569-44ae-8604-d1dae69d130b" Sep 12 23:56:36.758173 kubelet[3403]: I0912 23:56:36.757300 3403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:36.760369 containerd[2027]: time="2025-09-12T23:56:36.760152153Z" level=info msg="StopPodSandbox for \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\"" Sep 12 23:56:36.760903 containerd[2027]: time="2025-09-12T23:56:36.760455525Z" level=info msg="Ensure that sandbox cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7 in task-service has been cleanup successfully" Sep 12 23:56:36.762944 kubelet[3403]: I0912 23:56:36.762771 3403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:56:36.765431 containerd[2027]: time="2025-09-12T23:56:36.764745777Z" level=info msg="StopPodSandbox for \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\"" Sep 12 23:56:36.765431 containerd[2027]: time="2025-09-12T23:56:36.765055977Z" level=info msg="Ensure that sandbox 05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5 in task-service has been cleanup successfully" Sep 12 23:56:36.767603 kubelet[3403]: I0912 23:56:36.767371 3403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:56:36.771424 containerd[2027]: time="2025-09-12T23:56:36.771347805Z" level=info msg="StopPodSandbox for \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\"" Sep 12 23:56:36.773110 containerd[2027]: time="2025-09-12T23:56:36.773026857Z" level=info msg="Ensure that sandbox fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83 in task-service has been cleanup successfully" Sep 12 23:56:36.781879 kubelet[3403]: I0912 23:56:36.781304 3403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:56:36.785135 containerd[2027]: time="2025-09-12T23:56:36.785056605Z" level=info msg="StopPodSandbox for \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\"" Sep 12 23:56:36.787590 containerd[2027]: time="2025-09-12T23:56:36.787259073Z" level=info msg="Ensure that sandbox 973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34 in task-service has been cleanup successfully" Sep 12 23:56:36.791378 kubelet[3403]: I0912 23:56:36.788268 3403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:36.793385 containerd[2027]: time="2025-09-12T23:56:36.793145373Z" level=info msg="StopPodSandbox for \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\"" Sep 12 
23:56:36.793978 containerd[2027]: time="2025-09-12T23:56:36.793898385Z" level=info msg="Ensure that sandbox d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6 in task-service has been cleanup successfully" Sep 12 23:56:36.824249 kubelet[3403]: I0912 23:56:36.824177 3403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:36.828310 containerd[2027]: time="2025-09-12T23:56:36.828229989Z" level=info msg="StopPodSandbox for \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\"" Sep 12 23:56:36.828639 containerd[2027]: time="2025-09-12T23:56:36.828586293Z" level=info msg="Ensure that sandbox bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965 in task-service has been cleanup successfully" Sep 12 23:56:36.851336 kubelet[3403]: I0912 23:56:36.851264 3403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:56:36.859824 containerd[2027]: time="2025-09-12T23:56:36.859632117Z" level=info msg="StopPodSandbox for \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\"" Sep 12 23:56:36.863166 containerd[2027]: time="2025-09-12T23:56:36.862738053Z" level=info msg="Ensure that sandbox 2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05 in task-service has been cleanup successfully" Sep 12 23:56:36.869678 kubelet[3403]: I0912 23:56:36.869622 3403 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:36.873824 containerd[2027]: time="2025-09-12T23:56:36.873757497Z" level=info msg="StopPodSandbox for \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\"" Sep 12 23:56:36.874090 containerd[2027]: time="2025-09-12T23:56:36.874047297Z" level=info msg="Ensure that sandbox b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65 in task-service has been cleanup successfully" Sep 12 23:56:36.908381 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05-shm.mount: Deactivated successfully. Sep 12 23:56:36.908615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965-shm.mount: Deactivated successfully. 
Sep 12 23:56:36.962570 containerd[2027]: time="2025-09-12T23:56:36.961768066Z" level=error msg="StopPodSandbox for \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\" failed" error="failed to destroy network for sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:36.963112 kubelet[3403]: E0912 23:56:36.962064 3403 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:56:36.963112 kubelet[3403]: E0912 23:56:36.962147 3403 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34"} Sep 12 23:56:36.963112 kubelet[3403]: E0912 23:56:36.962241 3403 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0c64f6a-d5cd-4c90-b7b6-77627d265c99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:36.963112 kubelet[3403]: E0912 23:56:36.962282 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0c64f6a-d5cd-4c90-b7b6-77627d265c99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-6mb7r" podUID="e0c64f6a-d5cd-4c90-b7b6-77627d265c99" Sep 12 23:56:37.011670 containerd[2027]: time="2025-09-12T23:56:37.011582610Z" level=error msg="StopPodSandbox for \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\" failed" error="failed to destroy network for sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:37.012288 kubelet[3403]: E0912 23:56:37.012217 3403 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:37.012401 kubelet[3403]: E0912 23:56:37.012298 3403 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65"} Sep 12 23:56:37.012401 kubelet[3403]: E0912 23:56:37.012357 3403 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64bd2045-89ff-4c72-8c98-7616c71dee49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:37.013689 kubelet[3403]: E0912 23:56:37.012395 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64bd2045-89ff-4c72-8c98-7616c71dee49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-88f4475-t5fsb" podUID="64bd2045-89ff-4c72-8c98-7616c71dee49" Sep 12 23:56:37.077832 containerd[2027]: time="2025-09-12T23:56:37.077765238Z" level=error msg="StopPodSandbox for \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\" failed" error="failed to destroy network for sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:37.078345 kubelet[3403]: E0912 23:56:37.078281 3403 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:37.078482 kubelet[3403]: E0912 23:56:37.078357 3403 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7"} Sep 12 23:56:37.078482 kubelet[3403]: E0912 23:56:37.078418 3403 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cdcda89e-ca4a-4a78-8c84-d924c20fa355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:37.078482 kubelet[3403]: E0912 23:56:37.078456 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cdcda89e-ca4a-4a78-8c84-d924c20fa355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bc58cd45c-f245z" podUID="cdcda89e-ca4a-4a78-8c84-d924c20fa355" Sep 12 23:56:37.102740 containerd[2027]: time="2025-09-12T23:56:37.102622183Z" level=error msg="StopPodSandbox for \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\" failed" error="failed to destroy network for sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:37.103668 kubelet[3403]: E0912 23:56:37.103593 3403 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:56:37.103856 kubelet[3403]: E0912 23:56:37.103674 3403 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5"} Sep 12 23:56:37.103856 kubelet[3403]: E0912 23:56:37.103736 3403 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f60d9f74-1f96-4c24-9612-25c7fd4febe8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:37.103856 kubelet[3403]: E0912 23:56:37.103776 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f60d9f74-1f96-4c24-9612-25c7fd4febe8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zn6sm" podUID="f60d9f74-1f96-4c24-9612-25c7fd4febe8" Sep 12 23:56:37.104280 containerd[2027]: time="2025-09-12T23:56:37.104066515Z" level=error msg="StopPodSandbox for \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\" failed" error="failed to destroy network for sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:37.106652 kubelet[3403]: E0912 23:56:37.106319 3403 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:37.106788 kubelet[3403]: E0912 23:56:37.106667 3403 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6"} Sep 12 23:56:37.106788 kubelet[3403]: E0912 23:56:37.106750 3403 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc37ffcb-5047-4cd5-a85c-ffd3b1486c49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:37.106999 kubelet[3403]: E0912 23:56:37.106814 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc37ffcb-5047-4cd5-a85c-ffd3b1486c49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b884ffb96-z8ck8" podUID="bc37ffcb-5047-4cd5-a85c-ffd3b1486c49" Sep 12 23:56:37.119137 containerd[2027]: time="2025-09-12T23:56:37.117590467Z" level=error msg="StopPodSandbox for \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\" failed" error="failed to destroy network for sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:37.119270 kubelet[3403]: E0912 23:56:37.117869 3403 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:56:37.119270 kubelet[3403]: E0912 23:56:37.117937 3403 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05"} Sep 12 23:56:37.119270 kubelet[3403]: E0912 23:56:37.117989 3403 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1a3fe73-b569-44ae-8604-d1dae69d130b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:37.119270 kubelet[3403]: E0912 23:56:37.118037 3403 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"c1a3fe73-b569-44ae-8604-d1dae69d130b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xpl6r" podUID="c1a3fe73-b569-44ae-8604-d1dae69d130b" Sep 12 23:56:37.120866 containerd[2027]: time="2025-09-12T23:56:37.120803827Z" level=error msg="StopPodSandbox for \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\" failed" error="failed to destroy network for sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:37.121426 kubelet[3403]: E0912 23:56:37.121352 3403 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:37.121710 kubelet[3403]: E0912 23:56:37.121656 3403 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965"} Sep 12 23:56:37.121839 kubelet[3403]: E0912 23:56:37.121746 3403 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:37.121839 kubelet[3403]: E0912 23:56:37.121813 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vm44c" podUID="6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e" Sep 12 23:56:37.124183 containerd[2027]: time="2025-09-12T23:56:37.124108135Z" level=error msg="StopPodSandbox for \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\" failed" error="failed to destroy network for sandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:56:37.124473 kubelet[3403]: E0912 23:56:37.124393 3403 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:56:37.124874 kubelet[3403]: E0912 23:56:37.124475 3403 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83"} Sep 12 23:56:37.124874 kubelet[3403]: E0912 23:56:37.124528 3403 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eeb53fe4-c09c-4147-b78d-113cabffc7ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:56:37.124874 kubelet[3403]: E0912 23:56:37.124615 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eeb53fe4-c09c-4147-b78d-113cabffc7ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-88f4475-4dq8d" podUID="eeb53fe4-c09c-4147-b78d-113cabffc7ee" Sep 12 23:56:42.191136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893624232.mount: Deactivated successfully. 
Sep 12 23:56:42.253229 containerd[2027]: time="2025-09-12T23:56:42.251788800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:42.255918 containerd[2027]: time="2025-09-12T23:56:42.255854640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 23:56:42.258509 containerd[2027]: time="2025-09-12T23:56:42.258430428Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:42.263348 containerd[2027]: time="2025-09-12T23:56:42.263262732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:42.265768 containerd[2027]: time="2025-09-12T23:56:42.264713700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.509521892s" Sep 12 23:56:42.265768 containerd[2027]: time="2025-09-12T23:56:42.264778944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 23:56:42.306104 containerd[2027]: time="2025-09-12T23:56:42.306016440Z" level=info msg="CreateContainer within sandbox \"a063701be077aa3d99bb07dd89ef4def063eafb7a99196ee86519aba69ca5fab\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 23:56:42.343006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount299973727.mount: Deactivated successfully. Sep 12 23:56:42.346970 containerd[2027]: time="2025-09-12T23:56:42.346881277Z" level=info msg="CreateContainer within sandbox \"a063701be077aa3d99bb07dd89ef4def063eafb7a99196ee86519aba69ca5fab\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4441eb05d11fa075e225f2c4ce35af7ac223c0153f63f2e62316b88429278f9b\"" Sep 12 23:56:42.349670 containerd[2027]: time="2025-09-12T23:56:42.349604665Z" level=info msg="StartContainer for \"4441eb05d11fa075e225f2c4ce35af7ac223c0153f63f2e62316b88429278f9b\"" Sep 12 23:56:42.401877 systemd[1]: Started cri-containerd-4441eb05d11fa075e225f2c4ce35af7ac223c0153f63f2e62316b88429278f9b.scope - libcontainer container 4441eb05d11fa075e225f2c4ce35af7ac223c0153f63f2e62316b88429278f9b. Sep 12 23:56:42.466741 containerd[2027]: time="2025-09-12T23:56:42.466589197Z" level=info msg="StartContainer for \"4441eb05d11fa075e225f2c4ce35af7ac223c0153f63f2e62316b88429278f9b\" returns successfully" Sep 12 23:56:42.735616 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 23:56:42.735777 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
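Two details in that burst are worth pulling out. The pull that unblocks networking moved the calico/node image, 151,100,319 bytes (about 144 MiB), in 6.509521892 s, roughly 22 MiB/s from the registry. And the WireGuard module loading moments after calico-node starts is consistent with Calico probing the kernel for WireGuard support, which it uses for optional node-to-node encryption; that reading is an inference from the timing, not something the log states.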
Sep 12 23:56:43.037596 kubelet[3403]: I0912 23:56:43.036864 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8tcct" podStartSLOduration=1.52648135 podStartE2EDuration="18.036832428s" podCreationTimestamp="2025-09-12 23:56:25 +0000 UTC" firstStartedPulling="2025-09-12 23:56:25.75584959 +0000 UTC m=+29.585453968" lastFinishedPulling="2025-09-12 23:56:42.266200668 +0000 UTC m=+46.095805046" observedRunningTime="2025-09-12 23:56:42.965043604 +0000 UTC m=+46.794648006" watchObservedRunningTime="2025-09-12 23:56:43.036832428 +0000 UTC m=+46.866436794" Sep 12 23:56:43.040190 containerd[2027]: time="2025-09-12T23:56:43.039411048Z" level=info msg="StopPodSandbox for \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\"" Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.280 [INFO][4657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.280 [INFO][4657] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" iface="eth0" netns="/var/run/netns/cni-23292abb-1148-8820-2a57-2710933ef4ea" Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.283 [INFO][4657] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" iface="eth0" netns="/var/run/netns/cni-23292abb-1148-8820-2a57-2710933ef4ea" Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.286 [INFO][4657] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" iface="eth0" netns="/var/run/netns/cni-23292abb-1148-8820-2a57-2710933ef4ea" Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.287 [INFO][4657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.287 [INFO][4657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.406 [INFO][4670] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" HandleID="k8s-pod-network.d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Workload="ip--172--31--20--162-k8s-whisker--b884ffb96--z8ck8-eth0" Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.406 [INFO][4670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.406 [INFO][4670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.421 [WARNING][4670] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" HandleID="k8s-pod-network.d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Workload="ip--172--31--20--162-k8s-whisker--b884ffb96--z8ck8-eth0" Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.421 [INFO][4670] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" HandleID="k8s-pod-network.d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Workload="ip--172--31--20--162-k8s-whisker--b884ffb96--z8ck8-eth0" Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.424 [INFO][4670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:43.434818 containerd[2027]: 2025-09-12 23:56:43.431 [INFO][4657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:43.439691 containerd[2027]: time="2025-09-12T23:56:43.434991974Z" level=info msg="TearDown network for sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\" successfully" Sep 12 23:56:43.439691 containerd[2027]: time="2025-09-12T23:56:43.435056114Z" level=info msg="StopPodSandbox for \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\" returns successfully" Sep 12 23:56:43.445657 systemd[1]: run-netns-cni\x2d23292abb\x2d1148\x2d8820\x2d2a57\x2d2710933ef4ea.mount: Deactivated successfully. Sep 12 23:56:43.567594 kubelet[3403]: I0912 23:56:43.567508 3403 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-whisker-ca-bundle\") pod \"bc37ffcb-5047-4cd5-a85c-ffd3b1486c49\" (UID: \"bc37ffcb-5047-4cd5-a85c-ffd3b1486c49\") " Sep 12 23:56:43.567828 kubelet[3403]: I0912 23:56:43.567625 3403 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd4xb\" (UniqueName: \"kubernetes.io/projected/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-kube-api-access-rd4xb\") pod \"bc37ffcb-5047-4cd5-a85c-ffd3b1486c49\" (UID: \"bc37ffcb-5047-4cd5-a85c-ffd3b1486c49\") " Sep 12 23:56:43.567828 kubelet[3403]: I0912 23:56:43.567673 3403 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-whisker-backend-key-pair\") pod \"bc37ffcb-5047-4cd5-a85c-ffd3b1486c49\" (UID: \"bc37ffcb-5047-4cd5-a85c-ffd3b1486c49\") " Sep 12 23:56:43.569972 kubelet[3403]: I0912 23:56:43.568968 3403 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bc37ffcb-5047-4cd5-a85c-ffd3b1486c49" (UID: "bc37ffcb-5047-4cd5-a85c-ffd3b1486c49"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 23:56:43.585536 systemd[1]: var-lib-kubelet-pods-bc37ffcb\x2d5047\x2d4cd5\x2da85c\x2dffd3b1486c49-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drd4xb.mount: Deactivated successfully. Sep 12 23:56:43.585812 systemd[1]: var-lib-kubelet-pods-bc37ffcb\x2d5047\x2d4cd5\x2da85c\x2dffd3b1486c49-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
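The podStartSLOduration kubelet reported for calico-node-8tcct a few entries back can be reconstructed from the monotonic (m=+...) timestamps in that same entry: the SLO figure is the end-to-end startup duration minus the image-pull window:

    podStartSLOduration = 18.036832428 - (46.095805046 - 29.585453968)
                        = 18.036832428 - 16.510351078
                        = 1.526481350 s

In other words, of the roughly 18 s the pod took to come up, all but about 1.5 s was spent pulling the calico/node image.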
Sep 12 23:56:43.589125 kubelet[3403]: I0912 23:56:43.588896 3403 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-kube-api-access-rd4xb" (OuterVolumeSpecName: "kube-api-access-rd4xb") pod "bc37ffcb-5047-4cd5-a85c-ffd3b1486c49" (UID: "bc37ffcb-5047-4cd5-a85c-ffd3b1486c49"). InnerVolumeSpecName "kube-api-access-rd4xb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:56:43.589125 kubelet[3403]: I0912 23:56:43.589075 3403 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bc37ffcb-5047-4cd5-a85c-ffd3b1486c49" (UID: "bc37ffcb-5047-4cd5-a85c-ffd3b1486c49"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 23:56:43.669870 kubelet[3403]: I0912 23:56:43.669811 3403 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-whisker-ca-bundle\") on node \"ip-172-31-20-162\" DevicePath \"\"" Sep 12 23:56:43.670037 kubelet[3403]: I0912 23:56:43.669887 3403 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rd4xb\" (UniqueName: \"kubernetes.io/projected/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-kube-api-access-rd4xb\") on node \"ip-172-31-20-162\" DevicePath \"\"" Sep 12 23:56:43.670037 kubelet[3403]: I0912 23:56:43.669914 3403 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49-whisker-backend-key-pair\") on node \"ip-172-31-20-162\" DevicePath \"\"" Sep 12 23:56:43.923734 systemd[1]: Removed slice kubepods-besteffort-podbc37ffcb_5047_4cd5_a85c_ffd3b1486c49.slice - libcontainer container kubepods-besteffort-podbc37ffcb_5047_4cd5_a85c_ffd3b1486c49.slice. Sep 12 23:56:44.043480 systemd[1]: Created slice kubepods-besteffort-pod509f5fc6_5228_4c1a_bd4d_dd91d1986981.slice - libcontainer container kubepods-besteffort-pod509f5fc6_5228_4c1a_bd4d_dd91d1986981.slice. 
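The slice names in those systemd entries follow kubelet's systemd cgroup-driver convention, kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID escaped to underscores because "-" is systemd's slice-hierarchy separator. So the outgoing whisker pod UID bc37ffcb-5047-4cd5-a85c-ffd3b1486c49 maps to kubepods-besteffort-podbc37ffcb_5047_4cd5_a85c_ffd3b1486c49.slice, and its replacement 509f5fc6-5228-4c1a-bd4d-dd91d1986981 to kubepods-besteffort-pod509f5fc6_5228_4c1a_bd4d_dd91d1986981.slice, exactly the units removed and created above.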
Sep 12 23:56:44.072435 kubelet[3403]: I0912 23:56:44.072270 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjcbh\" (UniqueName: \"kubernetes.io/projected/509f5fc6-5228-4c1a-bd4d-dd91d1986981-kube-api-access-mjcbh\") pod \"whisker-7cfd7d6fcf-p4xpc\" (UID: \"509f5fc6-5228-4c1a-bd4d-dd91d1986981\") " pod="calico-system/whisker-7cfd7d6fcf-p4xpc" Sep 12 23:56:44.072435 kubelet[3403]: I0912 23:56:44.072351 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/509f5fc6-5228-4c1a-bd4d-dd91d1986981-whisker-backend-key-pair\") pod \"whisker-7cfd7d6fcf-p4xpc\" (UID: \"509f5fc6-5228-4c1a-bd4d-dd91d1986981\") " pod="calico-system/whisker-7cfd7d6fcf-p4xpc" Sep 12 23:56:44.072435 kubelet[3403]: I0912 23:56:44.072397 3403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/509f5fc6-5228-4c1a-bd4d-dd91d1986981-whisker-ca-bundle\") pod \"whisker-7cfd7d6fcf-p4xpc\" (UID: \"509f5fc6-5228-4c1a-bd4d-dd91d1986981\") " pod="calico-system/whisker-7cfd7d6fcf-p4xpc" Sep 12 23:56:44.356717 containerd[2027]: time="2025-09-12T23:56:44.356643819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cfd7d6fcf-p4xpc,Uid:509f5fc6-5228-4c1a-bd4d-dd91d1986981,Namespace:calico-system,Attempt:0,}" Sep 12 23:56:44.468420 kubelet[3403]: I0912 23:56:44.467951 3403 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc37ffcb-5047-4cd5-a85c-ffd3b1486c49" path="/var/lib/kubelet/pods/bc37ffcb-5047-4cd5-a85c-ffd3b1486c49/volumes" Sep 12 23:56:44.600019 (udev-worker)[4622]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 23:56:44.605125 systemd-networkd[1851]: cali84c5c90bab2: Link UP Sep 12 23:56:44.606582 systemd-networkd[1851]: cali84c5c90bab2: Gained carrier Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.428 [INFO][4716] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.452 [INFO][4716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0 whisker-7cfd7d6fcf- calico-system 509f5fc6-5228-4c1a-bd4d-dd91d1986981 925 0 2025-09-12 23:56:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7cfd7d6fcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-20-162 whisker-7cfd7d6fcf-p4xpc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali84c5c90bab2 [] [] }} ContainerID="835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" Namespace="calico-system" Pod="whisker-7cfd7d6fcf-p4xpc" WorkloadEndpoint="ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.452 [INFO][4716] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" Namespace="calico-system" Pod="whisker-7cfd7d6fcf-p4xpc" WorkloadEndpoint="ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.513 [INFO][4727] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" HandleID="k8s-pod-network.835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" Workload="ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.513 [INFO][4727] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" HandleID="k8s-pod-network.835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" Workload="ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c220), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-162", "pod":"whisker-7cfd7d6fcf-p4xpc", "timestamp":"2025-09-12 23:56:44.513710571 +0000 UTC"}, Hostname:"ip-172-31-20-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.514 [INFO][4727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.514 [INFO][4727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.514 [INFO][4727] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-162' Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.530 [INFO][4727] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" host="ip-172-31-20-162" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.539 [INFO][4727] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-162" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.547 [INFO][4727] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.551 [INFO][4727] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.556 [INFO][4727] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.556 [INFO][4727] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" host="ip-172-31-20-162" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.560 [INFO][4727] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7 Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.573 [INFO][4727] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" host="ip-172-31-20-162" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.584 [INFO][4727] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.65/26] block=192.168.55.64/26 handle="k8s-pod-network.835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" host="ip-172-31-20-162" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.584 [INFO][4727] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.65/26] handle="k8s-pod-network.835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" host="ip-172-31-20-162" Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.584 [INFO][4727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:44.636385 containerd[2027]: 2025-09-12 23:56:44.584 [INFO][4727] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.65/26] IPv6=[] ContainerID="835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" HandleID="k8s-pod-network.835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" Workload="ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0" Sep 12 23:56:44.638986 containerd[2027]: 2025-09-12 23:56:44.588 [INFO][4716] cni-plugin/k8s.go 418: Populated endpoint ContainerID="835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" Namespace="calico-system" Pod="whisker-7cfd7d6fcf-p4xpc" WorkloadEndpoint="ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0", GenerateName:"whisker-7cfd7d6fcf-", Namespace:"calico-system", SelfLink:"", UID:"509f5fc6-5228-4c1a-bd4d-dd91d1986981", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cfd7d6fcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"", Pod:"whisker-7cfd7d6fcf-p4xpc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.55.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali84c5c90bab2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:44.638986 containerd[2027]: 2025-09-12 23:56:44.588 [INFO][4716] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.65/32] ContainerID="835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" Namespace="calico-system" Pod="whisker-7cfd7d6fcf-p4xpc" WorkloadEndpoint="ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0" Sep 12 23:56:44.638986 containerd[2027]: 2025-09-12 23:56:44.588 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84c5c90bab2 ContainerID="835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" Namespace="calico-system" Pod="whisker-7cfd7d6fcf-p4xpc" WorkloadEndpoint="ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0" Sep 12 23:56:44.638986 containerd[2027]: 2025-09-12 23:56:44.608 [INFO][4716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" Namespace="calico-system" Pod="whisker-7cfd7d6fcf-p4xpc" WorkloadEndpoint="ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0" Sep 12 23:56:44.638986 containerd[2027]: 2025-09-12 23:56:44.608 [INFO][4716] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" Namespace="calico-system" Pod="whisker-7cfd7d6fcf-p4xpc"
WorkloadEndpoint="ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0", GenerateName:"whisker-7cfd7d6fcf-", Namespace:"calico-system", SelfLink:"", UID:"509f5fc6-5228-4c1a-bd4d-dd91d1986981", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cfd7d6fcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7", Pod:"whisker-7cfd7d6fcf-p4xpc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.55.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali84c5c90bab2", MAC:"8e:bd:5f:dd:37:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:44.638986 containerd[2027]: 2025-09-12 23:56:44.632 [INFO][4716] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7" Namespace="calico-system" Pod="whisker-7cfd7d6fcf-p4xpc" WorkloadEndpoint="ip--172--31--20--162-k8s-whisker--7cfd7d6fcf--p4xpc-eth0" Sep 12 23:56:44.668168 containerd[2027]: time="2025-09-12T23:56:44.667666900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:44.668168 containerd[2027]: time="2025-09-12T23:56:44.667766620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:44.668168 containerd[2027]: time="2025-09-12T23:56:44.667803124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:44.668168 containerd[2027]: time="2025-09-12T23:56:44.667980388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:44.735526 systemd[1]: Started cri-containerd-835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7.scope - libcontainer container 835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7. 
Sep 12 23:56:44.875781 containerd[2027]: time="2025-09-12T23:56:44.874684241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cfd7d6fcf-p4xpc,Uid:509f5fc6-5228-4c1a-bd4d-dd91d1986981,Namespace:calico-system,Attempt:0,} returns sandbox id \"835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7\"" Sep 12 23:56:44.881485 containerd[2027]: time="2025-09-12T23:56:44.881404757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 23:56:45.911832 systemd-networkd[1851]: cali84c5c90bab2: Gained IPv6LL Sep 12 23:56:46.163650 containerd[2027]: time="2025-09-12T23:56:46.161820952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:46.164469 containerd[2027]: time="2025-09-12T23:56:46.164417080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 23:56:46.166077 containerd[2027]: time="2025-09-12T23:56:46.166026712Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:46.171266 containerd[2027]: time="2025-09-12T23:56:46.171177628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:46.174318 containerd[2027]: time="2025-09-12T23:56:46.174116668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.292506075s" Sep 12 23:56:46.174318 containerd[2027]: time="2025-09-12T23:56:46.174182512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 23:56:46.180788 containerd[2027]: time="2025-09-12T23:56:46.180701512Z" level=info msg="CreateContainer within sandbox \"835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 23:56:46.211284 containerd[2027]: time="2025-09-12T23:56:46.210882784Z" level=info msg="CreateContainer within sandbox \"835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ee86f17855c9fb2ad491e84c8ec2766b9b868447c21186a796e7f90fdf9e5502\"" Sep 12 23:56:46.214754 containerd[2027]: time="2025-09-12T23:56:46.214657744Z" level=info msg="StartContainer for \"ee86f17855c9fb2ad491e84c8ec2766b9b868447c21186a796e7f90fdf9e5502\"" Sep 12 23:56:46.291818 systemd[1]: run-containerd-runc-k8s.io-ee86f17855c9fb2ad491e84c8ec2766b9b868447c21186a796e7f90fdf9e5502-runc.p7Dx0F.mount: Deactivated successfully. Sep 12 23:56:46.303926 systemd[1]: Started cri-containerd-ee86f17855c9fb2ad491e84c8ec2766b9b868447c21186a796e7f90fdf9e5502.scope - libcontainer container ee86f17855c9fb2ad491e84c8ec2766b9b868447c21186a796e7f90fdf9e5502. 
Sep 12 23:56:46.404291 containerd[2027]: time="2025-09-12T23:56:46.403391537Z" level=info msg="StartContainer for \"ee86f17855c9fb2ad491e84c8ec2766b9b868447c21186a796e7f90fdf9e5502\" returns successfully" Sep 12 23:56:46.406839 containerd[2027]: time="2025-09-12T23:56:46.406775789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 23:56:47.462058 containerd[2027]: time="2025-09-12T23:56:47.461903562Z" level=info msg="StopPodSandbox for \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\"" Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.579 [INFO][4969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.579 [INFO][4969] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" iface="eth0" netns="/var/run/netns/cni-06b35e92-29fd-6370-107a-9fb1301a5d4c" Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.581 [INFO][4969] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" iface="eth0" netns="/var/run/netns/cni-06b35e92-29fd-6370-107a-9fb1301a5d4c" Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.581 [INFO][4969] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" iface="eth0" netns="/var/run/netns/cni-06b35e92-29fd-6370-107a-9fb1301a5d4c" Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.581 [INFO][4969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.581 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.674 [INFO][4977] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" HandleID="k8s-pod-network.973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.675 [INFO][4977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.675 [INFO][4977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.691 [WARNING][4977] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" HandleID="k8s-pod-network.973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.691 [INFO][4977] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" HandleID="k8s-pod-network.973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.694 [INFO][4977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:47.702611 containerd[2027]: 2025-09-12 23:56:47.697 [INFO][4969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:56:47.702611 containerd[2027]: time="2025-09-12T23:56:47.702376675Z" level=info msg="TearDown network for sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\" successfully" Sep 12 23:56:47.702611 containerd[2027]: time="2025-09-12T23:56:47.702417559Z" level=info msg="StopPodSandbox for \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\" returns successfully" Sep 12 23:56:47.706876 containerd[2027]: time="2025-09-12T23:56:47.705947695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6mb7r,Uid:e0c64f6a-d5cd-4c90-b7b6-77627d265c99,Namespace:calico-system,Attempt:1,}" Sep 12 23:56:47.712389 systemd[1]: run-netns-cni\x2d06b35e92\x2d29fd\x2d6370\x2d107a\x2d9fb1301a5d4c.mount: Deactivated successfully. Sep 12 23:56:48.019396 systemd-networkd[1851]: calidfd04e1f21b: Link UP Sep 12 23:56:48.021165 systemd-networkd[1851]: calidfd04e1f21b: Gained carrier Sep 12 23:56:48.036698 (udev-worker)[5021]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.818 [INFO][4989] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.849 [INFO][4989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0 goldmane-54d579b49d- calico-system e0c64f6a-d5cd-4c90-b7b6-77627d265c99 947 0 2025-09-12 23:56:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-20-162 goldmane-54d579b49d-6mb7r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidfd04e1f21b [] [] }} ContainerID="30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" Namespace="calico-system" Pod="goldmane-54d579b49d-6mb7r" WorkloadEndpoint="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.849 [INFO][4989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" Namespace="calico-system" Pod="goldmane-54d579b49d-6mb7r" WorkloadEndpoint="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.935 [INFO][5008] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" HandleID="k8s-pod-network.30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.936 [INFO][5008] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" HandleID="k8s-pod-network.30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032a240), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-162", "pod":"goldmane-54d579b49d-6mb7r", "timestamp":"2025-09-12 23:56:47.935354792 +0000 UTC"}, Hostname:"ip-172-31-20-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.936 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.936 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.936 [INFO][5008] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-162' Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.952 [INFO][5008] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" host="ip-172-31-20-162" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.964 [INFO][5008] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-162" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.974 [INFO][5008] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.979 [INFO][5008] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.984 [INFO][5008] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.984 [INFO][5008] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" host="ip-172-31-20-162" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.988 [INFO][5008] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077 Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:47.995 [INFO][5008] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" host="ip-172-31-20-162" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:48.007 [INFO][5008] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.66/26] block=192.168.55.64/26 handle="k8s-pod-network.30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" host="ip-172-31-20-162" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:48.007 [INFO][5008] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.66/26] handle="k8s-pod-network.30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" host="ip-172-31-20-162" Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:48.007 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:48.058322 containerd[2027]: 2025-09-12 23:56:48.007 [INFO][5008] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.66/26] IPv6=[] ContainerID="30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" HandleID="k8s-pod-network.30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:56:48.061232 containerd[2027]: 2025-09-12 23:56:48.013 [INFO][4989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" Namespace="calico-system" Pod="goldmane-54d579b49d-6mb7r" WorkloadEndpoint="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e0c64f6a-d5cd-4c90-b7b6-77627d265c99", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"", Pod:"goldmane-54d579b49d-6mb7r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidfd04e1f21b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:48.061232 containerd[2027]: 2025-09-12 23:56:48.013 [INFO][4989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.66/32] ContainerID="30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" Namespace="calico-system" Pod="goldmane-54d579b49d-6mb7r" WorkloadEndpoint="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:56:48.061232 containerd[2027]: 2025-09-12 23:56:48.013 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfd04e1f21b ContainerID="30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" Namespace="calico-system" Pod="goldmane-54d579b49d-6mb7r" WorkloadEndpoint="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:56:48.061232 containerd[2027]: 2025-09-12 23:56:48.021 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" Namespace="calico-system" Pod="goldmane-54d579b49d-6mb7r" WorkloadEndpoint="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:56:48.061232 containerd[2027]: 2025-09-12 23:56:48.023 [INFO][4989] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" Namespace="calico-system" Pod="goldmane-54d579b49d-6mb7r" 
WorkloadEndpoint="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e0c64f6a-d5cd-4c90-b7b6-77627d265c99", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077", Pod:"goldmane-54d579b49d-6mb7r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidfd04e1f21b", MAC:"16:8e:4f:a1:11:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:48.061232 containerd[2027]: 2025-09-12 23:56:48.049 [INFO][4989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077" Namespace="calico-system" Pod="goldmane-54d579b49d-6mb7r" WorkloadEndpoint="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:56:48.117769 containerd[2027]: time="2025-09-12T23:56:48.114937385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:48.117769 containerd[2027]: time="2025-09-12T23:56:48.115040285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:48.117769 containerd[2027]: time="2025-09-12T23:56:48.115066589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:48.117769 containerd[2027]: time="2025-09-12T23:56:48.115958033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:48.169918 systemd[1]: Started cri-containerd-30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077.scope - libcontainer container 30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077. 
Sep 12 23:56:48.237967 containerd[2027]: time="2025-09-12T23:56:48.237808854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6mb7r,Uid:e0c64f6a-d5cd-4c90-b7b6-77627d265c99,Namespace:calico-system,Attempt:1,} returns sandbox id \"30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077\"" Sep 12 23:56:48.465255 containerd[2027]: time="2025-09-12T23:56:48.465089587Z" level=info msg="StopPodSandbox for \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\"" Sep 12 23:56:48.472366 containerd[2027]: time="2025-09-12T23:56:48.471120499Z" level=info msg="StopPodSandbox for \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\"" Sep 12 23:56:48.711448 systemd[1]: run-containerd-runc-k8s.io-30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077-runc.w3X3wt.mount: Deactivated successfully. Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.643 [INFO][5088] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.646 [INFO][5088] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" iface="eth0" netns="/var/run/netns/cni-ef4f7016-61c9-c22e-2a62-9dcd12cd6454" Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.647 [INFO][5088] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" iface="eth0" netns="/var/run/netns/cni-ef4f7016-61c9-c22e-2a62-9dcd12cd6454" Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.648 [INFO][5088] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" iface="eth0" netns="/var/run/netns/cni-ef4f7016-61c9-c22e-2a62-9dcd12cd6454" Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.648 [INFO][5088] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.648 [INFO][5088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.742 [INFO][5106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" HandleID="k8s-pod-network.bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.743 [INFO][5106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.743 [INFO][5106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.764 [WARNING][5106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" HandleID="k8s-pod-network.bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.764 [INFO][5106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" HandleID="k8s-pod-network.bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.766 [INFO][5106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:48.779004 containerd[2027]: 2025-09-12 23:56:48.772 [INFO][5088] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:48.788577 containerd[2027]: time="2025-09-12T23:56:48.785301249Z" level=info msg="TearDown network for sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\" successfully" Sep 12 23:56:48.788577 containerd[2027]: time="2025-09-12T23:56:48.785362353Z" level=info msg="StopPodSandbox for \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\" returns successfully" Sep 12 23:56:48.789806 containerd[2027]: time="2025-09-12T23:56:48.789535749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vm44c,Uid:6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e,Namespace:kube-system,Attempt:1,}" Sep 12 23:56:48.790436 systemd[1]: run-netns-cni\x2def4f7016\x2d61c9\x2dc22e\x2d2a62\x2d9dcd12cd6454.mount: Deactivated successfully. Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.631 [INFO][5084] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.632 [INFO][5084] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" iface="eth0" netns="/var/run/netns/cni-68f8035a-cc3c-82d2-9804-957aa86c1231" Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.634 [INFO][5084] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" iface="eth0" netns="/var/run/netns/cni-68f8035a-cc3c-82d2-9804-957aa86c1231" Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.635 [INFO][5084] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" iface="eth0" netns="/var/run/netns/cni-68f8035a-cc3c-82d2-9804-957aa86c1231" Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.635 [INFO][5084] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.635 [INFO][5084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.751 [INFO][5101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" HandleID="k8s-pod-network.05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.752 [INFO][5101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.767 [INFO][5101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.797 [WARNING][5101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" HandleID="k8s-pod-network.05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.797 [INFO][5101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" HandleID="k8s-pod-network.05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.809 [INFO][5101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:48.819149 containerd[2027]: 2025-09-12 23:56:48.812 [INFO][5084] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:56:48.823695 containerd[2027]: time="2025-09-12T23:56:48.819393753Z" level=info msg="TearDown network for sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\" successfully" Sep 12 23:56:48.823695 containerd[2027]: time="2025-09-12T23:56:48.819434889Z" level=info msg="StopPodSandbox for \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\" returns successfully" Sep 12 23:56:48.823695 containerd[2027]: time="2025-09-12T23:56:48.821883201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zn6sm,Uid:f60d9f74-1f96-4c24-9612-25c7fd4febe8,Namespace:calico-system,Attempt:1,}" Sep 12 23:56:48.828177 systemd[1]: run-netns-cni\x2d68f8035a\x2dcc3c\x2d82d2\x2d9804\x2d957aa86c1231.mount: Deactivated successfully. Sep 12 23:56:49.339030 (udev-worker)[5028]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 23:56:49.342400 systemd-networkd[1851]: calia444d5d2f8e: Link UP Sep 12 23:56:49.343218 systemd-networkd[1851]: calia444d5d2f8e: Gained carrier Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:48.986 [INFO][5118] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.031 [INFO][5118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0 coredns-668d6bf9bc- kube-system 6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e 955 0 2025-09-12 23:56:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-162 coredns-668d6bf9bc-vm44c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia444d5d2f8e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vm44c" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.032 [INFO][5118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vm44c" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.211 [INFO][5151] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" HandleID="k8s-pod-network.59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.215 [INFO][5151] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" HandleID="k8s-pod-network.59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cfd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-162", "pod":"coredns-668d6bf9bc-vm44c", "timestamp":"2025-09-12 23:56:49.210394135 +0000 UTC"}, Hostname:"ip-172-31-20-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.215 [INFO][5151] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.216 [INFO][5151] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.216 [INFO][5151] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-162' Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.252 [INFO][5151] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" host="ip-172-31-20-162" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.272 [INFO][5151] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-162" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.280 [INFO][5151] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.285 [INFO][5151] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.295 [INFO][5151] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.295 [INFO][5151] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" host="ip-172-31-20-162" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.301 [INFO][5151] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.313 [INFO][5151] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" host="ip-172-31-20-162" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.325 [INFO][5151] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.67/26] block=192.168.55.64/26 handle="k8s-pod-network.59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" host="ip-172-31-20-162" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.325 [INFO][5151] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.67/26] handle="k8s-pod-network.59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" host="ip-172-31-20-162" Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.326 [INFO][5151] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:49.397970 containerd[2027]: 2025-09-12 23:56:49.326 [INFO][5151] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.67/26] IPv6=[] ContainerID="59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" HandleID="k8s-pod-network.59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:49.401979 containerd[2027]: 2025-09-12 23:56:49.333 [INFO][5118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vm44c" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"", Pod:"coredns-668d6bf9bc-vm44c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia444d5d2f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:49.401979 containerd[2027]: 2025-09-12 23:56:49.334 [INFO][5118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.67/32] ContainerID="59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vm44c" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:49.401979 containerd[2027]: 2025-09-12 23:56:49.334 [INFO][5118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia444d5d2f8e ContainerID="59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vm44c" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:49.401979 containerd[2027]: 2025-09-12 23:56:49.346 [INFO][5118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vm44c" 
WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:49.401979 containerd[2027]: 2025-09-12 23:56:49.352 [INFO][5118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vm44c" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f", Pod:"coredns-668d6bf9bc-vm44c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia444d5d2f8e", MAC:"ca:e0:61:b6:4a:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:49.402439 containerd[2027]: 2025-09-12 23:56:49.392 [INFO][5118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vm44c" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:49.467573 containerd[2027]: time="2025-09-12T23:56:49.465082076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:49.467573 containerd[2027]: time="2025-09-12T23:56:49.465170324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:49.467573 containerd[2027]: time="2025-09-12T23:56:49.465195344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:49.467573 containerd[2027]: time="2025-09-12T23:56:49.465333884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:49.496775 systemd-networkd[1851]: calidfd04e1f21b: Gained IPv6LL Sep 12 23:56:49.517630 systemd-networkd[1851]: cali93221732749: Link UP Sep 12 23:56:49.518262 systemd-networkd[1851]: cali93221732749: Gained carrier Sep 12 23:56:49.574012 systemd[1]: Started cri-containerd-59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f.scope - libcontainer container 59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f. Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.006 [INFO][5127] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.055 [INFO][5127] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0 csi-node-driver- calico-system f60d9f74-1f96-4c24-9612-25c7fd4febe8 954 0 2025-09-12 23:56:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-20-162 csi-node-driver-zn6sm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali93221732749 [] [] }} ContainerID="1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" Namespace="calico-system" Pod="csi-node-driver-zn6sm" WorkloadEndpoint="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.055 [INFO][5127] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" Namespace="calico-system" Pod="csi-node-driver-zn6sm" WorkloadEndpoint="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.233 [INFO][5156] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" HandleID="k8s-pod-network.1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.234 [INFO][5156] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" HandleID="k8s-pod-network.1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-162", "pod":"csi-node-driver-zn6sm", "timestamp":"2025-09-12 23:56:49.232750339 +0000 UTC"}, Hostname:"ip-172-31-20-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.234 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.326 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.326 [INFO][5156] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-162' Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.366 [INFO][5156] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" host="ip-172-31-20-162" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.380 [INFO][5156] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-162" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.396 [INFO][5156] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.405 [INFO][5156] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.412 [INFO][5156] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.412 [INFO][5156] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" host="ip-172-31-20-162" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.420 [INFO][5156] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42 Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.440 [INFO][5156] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" host="ip-172-31-20-162" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.471 [INFO][5156] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.68/26] block=192.168.55.64/26 handle="k8s-pod-network.1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" host="ip-172-31-20-162" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.471 [INFO][5156] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.68/26] handle="k8s-pod-network.1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" host="ip-172-31-20-162" Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.471 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:49.586691 containerd[2027]: 2025-09-12 23:56:49.471 [INFO][5156] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.68/26] IPv6=[] ContainerID="1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" HandleID="k8s-pod-network.1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:56:49.589381 containerd[2027]: 2025-09-12 23:56:49.486 [INFO][5127] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" Namespace="calico-system" Pod="csi-node-driver-zn6sm" WorkloadEndpoint="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f60d9f74-1f96-4c24-9612-25c7fd4febe8", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"", Pod:"csi-node-driver-zn6sm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93221732749", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:49.589381 containerd[2027]: 2025-09-12 23:56:49.490 [INFO][5127] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.68/32] ContainerID="1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" Namespace="calico-system" Pod="csi-node-driver-zn6sm" WorkloadEndpoint="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:56:49.589381 containerd[2027]: 2025-09-12 23:56:49.490 [INFO][5127] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93221732749 ContainerID="1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" Namespace="calico-system" Pod="csi-node-driver-zn6sm" WorkloadEndpoint="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:56:49.589381 containerd[2027]: 2025-09-12 23:56:49.521 [INFO][5127] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" Namespace="calico-system" Pod="csi-node-driver-zn6sm" WorkloadEndpoint="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:56:49.589381 containerd[2027]: 2025-09-12 23:56:49.529 [INFO][5127] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" 
Namespace="calico-system" Pod="csi-node-driver-zn6sm" WorkloadEndpoint="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f60d9f74-1f96-4c24-9612-25c7fd4febe8", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42", Pod:"csi-node-driver-zn6sm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93221732749", MAC:"86:a4:85:d2:2a:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:49.589381 containerd[2027]: 2025-09-12 23:56:49.580 [INFO][5127] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42" Namespace="calico-system" Pod="csi-node-driver-zn6sm" WorkloadEndpoint="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:56:49.652082 containerd[2027]: time="2025-09-12T23:56:49.651826401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:49.653028 containerd[2027]: time="2025-09-12T23:56:49.652281909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:49.653028 containerd[2027]: time="2025-09-12T23:56:49.652389561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:49.668372 containerd[2027]: time="2025-09-12T23:56:49.659781429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:49.847940 systemd[1]: Started cri-containerd-1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42.scope - libcontainer container 1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42. 
Sep 12 23:56:49.898527 containerd[2027]: time="2025-09-12T23:56:49.897110614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vm44c,Uid:6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e,Namespace:kube-system,Attempt:1,} returns sandbox id \"59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f\"" Sep 12 23:56:49.908204 containerd[2027]: time="2025-09-12T23:56:49.907836850Z" level=info msg="CreateContainer within sandbox \"59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:56:49.960535 containerd[2027]: time="2025-09-12T23:56:49.960483190Z" level=info msg="CreateContainer within sandbox \"59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5055c9c3315a8fbb14069b00870697e38e7d163920baf4e08791355c1cb4256\"" Sep 12 23:56:49.965476 containerd[2027]: time="2025-09-12T23:56:49.965122126Z" level=info msg="StartContainer for \"e5055c9c3315a8fbb14069b00870697e38e7d163920baf4e08791355c1cb4256\"" Sep 12 23:56:50.016482 containerd[2027]: time="2025-09-12T23:56:50.016336195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zn6sm,Uid:f60d9f74-1f96-4c24-9612-25c7fd4febe8,Namespace:calico-system,Attempt:1,} returns sandbox id \"1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42\"" Sep 12 23:56:50.099903 systemd[1]: Started cri-containerd-e5055c9c3315a8fbb14069b00870697e38e7d163920baf4e08791355c1cb4256.scope - libcontainer container e5055c9c3315a8fbb14069b00870697e38e7d163920baf4e08791355c1cb4256. Sep 12 23:56:50.169492 containerd[2027]: time="2025-09-12T23:56:50.166755091Z" level=info msg="StartContainer for \"e5055c9c3315a8fbb14069b00870697e38e7d163920baf4e08791355c1cb4256\" returns successfully" Sep 12 23:56:50.520886 systemd-networkd[1851]: calia444d5d2f8e: Gained IPv6LL Sep 12 23:56:51.046640 kubelet[3403]: I0912 23:56:51.044071 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vm44c" podStartSLOduration=51.04392686 podStartE2EDuration="51.04392686s" podCreationTimestamp="2025-09-12 23:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:56:51.038918096 +0000 UTC m=+54.868522486" watchObservedRunningTime="2025-09-12 23:56:51.04392686 +0000 UTC m=+54.873531238" Sep 12 23:56:51.351989 systemd-networkd[1851]: cali93221732749: Gained IPv6LL Sep 12 23:56:51.416600 kubelet[3403]: I0912 23:56:51.415129 3403 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:56:51.463149 containerd[2027]: time="2025-09-12T23:56:51.463053778Z" level=info msg="StopPodSandbox for \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\"" Sep 12 23:56:51.466093 containerd[2027]: time="2025-09-12T23:56:51.465820474Z" level=info msg="StopPodSandbox for \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\"" Sep 12 23:56:51.710507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3770000592.mount: Deactivated successfully. 
Sep 12 23:56:51.783001 containerd[2027]: time="2025-09-12T23:56:51.782323907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:51.791485 containerd[2027]: time="2025-09-12T23:56:51.789522527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 23:56:51.797804 containerd[2027]: time="2025-09-12T23:56:51.797735676Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:51.815612 containerd[2027]: time="2025-09-12T23:56:51.814693752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:51.821939 containerd[2027]: time="2025-09-12T23:56:51.821832336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 5.414988903s" Sep 12 23:56:51.822879 containerd[2027]: time="2025-09-12T23:56:51.822630012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 23:56:51.840997 containerd[2027]: time="2025-09-12T23:56:51.840895740Z" level=info msg="CreateContainer within sandbox \"835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.687 [INFO][5352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.687 [INFO][5352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" iface="eth0" netns="/var/run/netns/cni-30745d51-2be2-648b-e201-ad04c6c5bbd3" Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.688 [INFO][5352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" iface="eth0" netns="/var/run/netns/cni-30745d51-2be2-648b-e201-ad04c6c5bbd3" Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.690 [INFO][5352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" iface="eth0" netns="/var/run/netns/cni-30745d51-2be2-648b-e201-ad04c6c5bbd3" Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.691 [INFO][5352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.691 [INFO][5352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.778 [INFO][5375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" HandleID="k8s-pod-network.b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.780 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.781 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.812 [WARNING][5375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" HandleID="k8s-pod-network.b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.812 [INFO][5375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" HandleID="k8s-pod-network.b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.828 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:51.860594 containerd[2027]: 2025-09-12 23:56:51.838 [INFO][5352] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:51.868605 containerd[2027]: time="2025-09-12T23:56:51.865670052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 23:56:51.868605 containerd[2027]: time="2025-09-12T23:56:51.865942872Z" level=info msg="TearDown network for sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\" successfully" Sep 12 23:56:51.868605 containerd[2027]: time="2025-09-12T23:56:51.865977168Z" level=info msg="StopPodSandbox for \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\" returns successfully" Sep 12 23:56:51.871338 systemd[1]: run-netns-cni\x2d30745d51\x2d2be2\x2d648b\x2de201\x2dad04c6c5bbd3.mount: Deactivated successfully. 
Sep 12 23:56:51.876131 containerd[2027]: time="2025-09-12T23:56:51.876075024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f4475-t5fsb,Uid:64bd2045-89ff-4c72-8c98-7616c71dee49,Namespace:calico-apiserver,Attempt:1,}" Sep 12 23:56:51.881580 containerd[2027]: time="2025-09-12T23:56:51.881393892Z" level=info msg="CreateContainer within sandbox \"835e6209185d4ebbd1512a8f94f41385105d122b130bd553b2ad9bc0bf31f8d7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c10a5cbae54accd2916f3bac7691d702425ee01a905b1f0bc968e68d0fab89ee\"" Sep 12 23:56:51.890633 containerd[2027]: time="2025-09-12T23:56:51.890175876Z" level=info msg="StartContainer for \"c10a5cbae54accd2916f3bac7691d702425ee01a905b1f0bc968e68d0fab89ee\"" Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.658 [INFO][5351] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.660 [INFO][5351] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" iface="eth0" netns="/var/run/netns/cni-ef84db49-5e24-ca78-1d40-261f857af3e0" Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.660 [INFO][5351] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" iface="eth0" netns="/var/run/netns/cni-ef84db49-5e24-ca78-1d40-261f857af3e0" Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.662 [INFO][5351] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" iface="eth0" netns="/var/run/netns/cni-ef84db49-5e24-ca78-1d40-261f857af3e0" Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.663 [INFO][5351] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.663 [INFO][5351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.802 [INFO][5367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" HandleID="k8s-pod-network.2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.802 [INFO][5367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.831 [INFO][5367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.910 [WARNING][5367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" HandleID="k8s-pod-network.2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.910 [INFO][5367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" HandleID="k8s-pod-network.2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.918 [INFO][5367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:51.991937 containerd[2027]: 2025-09-12 23:56:51.944 [INFO][5351] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:56:51.998131 containerd[2027]: time="2025-09-12T23:56:51.995891268Z" level=info msg="TearDown network for sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\" successfully" Sep 12 23:56:51.998131 containerd[2027]: time="2025-09-12T23:56:51.995945232Z" level=info msg="StopPodSandbox for \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\" returns successfully" Sep 12 23:56:52.007659 systemd[1]: run-netns-cni\x2def84db49\x2d5e24\x2dca78\x2d1d40\x2d261f857af3e0.mount: Deactivated successfully. Sep 12 23:56:52.014443 containerd[2027]: time="2025-09-12T23:56:52.013306737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xpl6r,Uid:c1a3fe73-b569-44ae-8604-d1dae69d130b,Namespace:kube-system,Attempt:1,}" Sep 12 23:56:52.042519 systemd[1]: Started cri-containerd-c10a5cbae54accd2916f3bac7691d702425ee01a905b1f0bc968e68d0fab89ee.scope - libcontainer container c10a5cbae54accd2916f3bac7691d702425ee01a905b1f0bc968e68d0fab89ee. Sep 12 23:56:52.473994 containerd[2027]: time="2025-09-12T23:56:52.473911667Z" level=info msg="StopPodSandbox for \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\"" Sep 12 23:56:52.482985 containerd[2027]: time="2025-09-12T23:56:52.482293739Z" level=info msg="StopPodSandbox for \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\"" Sep 12 23:56:52.847172 systemd[1]: Started sshd@7-172.31.20.162:22-147.75.109.163:48758.service - OpenSSH per-connection server daemon (147.75.109.163:48758). Sep 12 23:56:52.970082 systemd-networkd[1851]: cali989d207671d: Link UP Sep 12 23:56:52.981051 (udev-worker)[5517]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 23:56:52.984691 systemd-networkd[1851]: cali989d207671d: Gained carrier Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.226 [INFO][5411] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.271 [INFO][5411] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0 calico-apiserver-88f4475- calico-apiserver 64bd2045-89ff-4c72-8c98-7616c71dee49 990 0 2025-09-12 23:56:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:88f4475 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-162 calico-apiserver-88f4475-t5fsb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali989d207671d [] [] }} ContainerID="36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-t5fsb" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.278 [INFO][5411] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-t5fsb" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.447 [INFO][5452] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" HandleID="k8s-pod-network.36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.448 [INFO][5452] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" HandleID="k8s-pod-network.36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031b870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-162", "pod":"calico-apiserver-88f4475-t5fsb", "timestamp":"2025-09-12 23:56:52.446689547 +0000 UTC"}, Hostname:"ip-172-31-20-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.448 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.448 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.449 [INFO][5452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-162' Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.499 [INFO][5452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" host="ip-172-31-20-162" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.542 [INFO][5452] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-162" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.622 [INFO][5452] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.676 [INFO][5452] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.737 [INFO][5452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.737 [INFO][5452] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" host="ip-172-31-20-162" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.769 [INFO][5452] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33 Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.817 [INFO][5452] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" host="ip-172-31-20-162" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.913 [INFO][5452] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.69/26] block=192.168.55.64/26 handle="k8s-pod-network.36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" host="ip-172-31-20-162" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.913 [INFO][5452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.69/26] handle="k8s-pod-network.36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" host="ip-172-31-20-162" Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.913 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
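The IPAM trace above is Calico's block-affinity allocation in miniature: host ip-172-31-20-162 holds an affinity for the /26 block 192.168.55.64/26, so each new pod on the node gets the next free address from that block (here 192.168.55.69), and the handle is then written back to claim the IP. A simplified sketch of the per-block assignment step; the types and names are hypothetical, not Calico's actual implementation, and real Calico additionally skips reserved addresses:

    package ipam

    import (
        "fmt"
        "net"
    )

    // Block is a hypothetical stand-in for an affine IPAM block such as
    // 192.168.55.64/26, tracking which addresses are already handed out.
    type Block struct {
        CIDR      *net.IPNet
        Allocated map[string]bool
    }

    // AssignNext returns the first free address in the block, mirroring
    // "Attempting to assign 1 addresses from block block=192.168.55.64/26".
    func (b *Block) AssignNext() (net.IP, error) {
        ip := b.CIDR.IP.Mask(b.CIDR.Mask)
        for ; b.CIDR.Contains(ip); ip = next(ip) {
            if !b.Allocated[ip.String()] {
                b.Allocated[ip.String()] = true
                return ip, nil
            }
        }
        return nil, fmt.Errorf("block %s is full", b.CIDR)
    }

    // next increments an IPv4 address by one.
    func next(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }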
Sep 12 23:56:53.087662 containerd[2027]: 2025-09-12 23:56:52.913 [INFO][5452] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.69/26] IPv6=[] ContainerID="36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" HandleID="k8s-pod-network.36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:53.092864 containerd[2027]: 2025-09-12 23:56:52.939 [INFO][5411] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-t5fsb" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0", GenerateName:"calico-apiserver-88f4475-", Namespace:"calico-apiserver", SelfLink:"", UID:"64bd2045-89ff-4c72-8c98-7616c71dee49", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f4475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"", Pod:"calico-apiserver-88f4475-t5fsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali989d207671d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:53.092864 containerd[2027]: 2025-09-12 23:56:52.943 [INFO][5411] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.69/32] ContainerID="36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-t5fsb" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:53.092864 containerd[2027]: 2025-09-12 23:56:52.943 [INFO][5411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali989d207671d ContainerID="36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-t5fsb" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:53.092864 containerd[2027]: 2025-09-12 23:56:53.003 [INFO][5411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-t5fsb" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:53.092864 containerd[2027]: 2025-09-12 23:56:53.003 [INFO][5411] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-t5fsb" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0", GenerateName:"calico-apiserver-88f4475-", Namespace:"calico-apiserver", SelfLink:"", UID:"64bd2045-89ff-4c72-8c98-7616c71dee49", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f4475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33", Pod:"calico-apiserver-88f4475-t5fsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali989d207671d", MAC:"1a:c2:07:4e:79:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:53.092864 containerd[2027]: 2025-09-12 23:56:53.068 [INFO][5411] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-t5fsb" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:53.113017 sshd[5510]: Accepted publickey for core from 147.75.109.163 port 48758 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:56:53.123664 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:53.152940 systemd-logind[1997]: New session 8 of user core. Sep 12 23:56:53.166976 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 23:56:53.367259 systemd-networkd[1851]: caliddf4aefc9ee: Link UP Sep 12 23:56:53.393917 systemd-networkd[1851]: caliddf4aefc9ee: Gained carrier Sep 12 23:56:53.467329 containerd[2027]: time="2025-09-12T23:56:53.467192964Z" level=info msg="StartContainer for \"c10a5cbae54accd2916f3bac7691d702425ee01a905b1f0bc968e68d0fab89ee\" returns successfully" Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:52.823 [INFO][5488] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:52.824 [INFO][5488] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" iface="eth0" netns="/var/run/netns/cni-05368ba0-8f75-ab2c-6645-89830f19a85b" Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:52.826 [INFO][5488] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" iface="eth0" netns="/var/run/netns/cni-05368ba0-8f75-ab2c-6645-89830f19a85b" Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:52.838 [INFO][5488] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" iface="eth0" netns="/var/run/netns/cni-05368ba0-8f75-ab2c-6645-89830f19a85b" Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:52.838 [INFO][5488] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:52.838 [INFO][5488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:53.199 [INFO][5511] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" HandleID="k8s-pod-network.fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:53.202 [INFO][5511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:53.322 [INFO][5511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:53.396 [WARNING][5511] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" HandleID="k8s-pod-network.fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:53.396 [INFO][5511] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" HandleID="k8s-pod-network.fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:53.419 [INFO][5511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:53.496798 containerd[2027]: 2025-09-12 23:56:53.469 [INFO][5488] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:56:53.505252 containerd[2027]: time="2025-09-12T23:56:53.504918096Z" level=info msg="TearDown network for sandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\" successfully" Sep 12 23:56:53.508489 systemd[1]: run-netns-cni\x2d05368ba0\x2d8f75\x2dab2c\x2d6645\x2d89830f19a85b.mount: Deactivated successfully. 
Sep 12 23:56:53.512499 containerd[2027]: time="2025-09-12T23:56:53.508966884Z" level=info msg="StopPodSandbox for \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\" returns successfully" Sep 12 23:56:53.523292 containerd[2027]: time="2025-09-12T23:56:53.522246912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:53.523292 containerd[2027]: time="2025-09-12T23:56:53.522359100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:53.523292 containerd[2027]: time="2025-09-12T23:56:53.522408504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:53.525185 containerd[2027]: time="2025-09-12T23:56:53.524321412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:53.532582 containerd[2027]: time="2025-09-12T23:56:53.532438308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f4475-4dq8d,Uid:eeb53fe4-c09c-4147-b78d-113cabffc7ee,Namespace:calico-apiserver,Attempt:1,}" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:52.238 [INFO][5428] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:52.293 [INFO][5428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0 coredns-668d6bf9bc- kube-system c1a3fe73-b569-44ae-8604-d1dae69d130b 989 0 2025-09-12 23:56:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-162 coredns-668d6bf9bc-xpl6r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliddf4aefc9ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" Namespace="kube-system" Pod="coredns-668d6bf9bc-xpl6r" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:52.293 [INFO][5428] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" Namespace="kube-system" Pod="coredns-668d6bf9bc-xpl6r" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:52.455 [INFO][5453] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" HandleID="k8s-pod-network.b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:52.455 [INFO][5453] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" HandleID="k8s-pod-network.b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003331a0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-162", "pod":"coredns-668d6bf9bc-xpl6r", "timestamp":"2025-09-12 23:56:52.452933579 +0000 UTC"}, Hostname:"ip-172-31-20-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:52.456 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:52.921 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:52.921 [INFO][5453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-162' Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.046 [INFO][5453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" host="ip-172-31-20-162" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.094 [INFO][5453] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-162" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.163 [INFO][5453] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.176 [INFO][5453] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.194 [INFO][5453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.194 [INFO][5453] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" host="ip-172-31-20-162" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.206 [INFO][5453] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8 Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.253 [INFO][5453] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" host="ip-172-31-20-162" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.321 [INFO][5453] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.70/26] block=192.168.55.64/26 handle="k8s-pod-network.b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" host="ip-172-31-20-162" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.322 [INFO][5453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.70/26] handle="k8s-pod-network.b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" host="ip-172-31-20-162" Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.322 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:56:53.533275 containerd[2027]: 2025-09-12 23:56:53.324 [INFO][5453] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.70/26] IPv6=[] ContainerID="b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" HandleID="k8s-pod-network.b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:56:53.534467 containerd[2027]: 2025-09-12 23:56:53.336 [INFO][5428] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" Namespace="kube-system" Pod="coredns-668d6bf9bc-xpl6r" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c1a3fe73-b569-44ae-8604-d1dae69d130b", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"", Pod:"coredns-668d6bf9bc-xpl6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddf4aefc9ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:53.534467 containerd[2027]: 2025-09-12 23:56:53.338 [INFO][5428] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.70/32] ContainerID="b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" Namespace="kube-system" Pod="coredns-668d6bf9bc-xpl6r" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:56:53.534467 containerd[2027]: 2025-09-12 23:56:53.339 [INFO][5428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddf4aefc9ee ContainerID="b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" Namespace="kube-system" Pod="coredns-668d6bf9bc-xpl6r" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:56:53.534467 containerd[2027]: 2025-09-12 23:56:53.399 [INFO][5428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" Namespace="kube-system" Pod="coredns-668d6bf9bc-xpl6r" 
WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:56:53.534467 containerd[2027]: 2025-09-12 23:56:53.405 [INFO][5428] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" Namespace="kube-system" Pod="coredns-668d6bf9bc-xpl6r" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c1a3fe73-b569-44ae-8604-d1dae69d130b", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8", Pod:"coredns-668d6bf9bc-xpl6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddf4aefc9ee", MAC:"76:1e:8d:03:54:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:53.537481 containerd[2027]: 2025-09-12 23:56:53.485 [INFO][5428] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8" Namespace="kube-system" Pod="coredns-668d6bf9bc-xpl6r" WorkloadEndpoint="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:52.806 [INFO][5490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:52.807 [INFO][5490] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" iface="eth0" netns="/var/run/netns/cni-421fc771-80f8-f1c7-8999-eb2beae7b31f" Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:52.807 [INFO][5490] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" iface="eth0" netns="/var/run/netns/cni-421fc771-80f8-f1c7-8999-eb2beae7b31f" Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:52.808 [INFO][5490] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" iface="eth0" netns="/var/run/netns/cni-421fc771-80f8-f1c7-8999-eb2beae7b31f" Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:52.808 [INFO][5490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:52.809 [INFO][5490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:53.271 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" HandleID="k8s-pod-network.cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:53.284 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:53.433 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:53.543 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" HandleID="k8s-pod-network.cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:53.544 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" HandleID="k8s-pod-network.cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:53.559 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:53.610207 containerd[2027]: 2025-09-12 23:56:53.583 [INFO][5490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:53.619653 containerd[2027]: time="2025-09-12T23:56:53.619089925Z" level=info msg="TearDown network for sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\" successfully" Sep 12 23:56:53.619653 containerd[2027]: time="2025-09-12T23:56:53.619150825Z" level=info msg="StopPodSandbox for \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\" returns successfully" Sep 12 23:56:53.620060 systemd[1]: run-netns-cni\x2d421fc771\x2d80f8\x2df1c7\x2d8999\x2deb2beae7b31f.mount: Deactivated successfully. 
Sep 12 23:56:53.627737 containerd[2027]: time="2025-09-12T23:56:53.626325733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bc58cd45c-f245z,Uid:cdcda89e-ca4a-4a78-8c84-d924c20fa355,Namespace:calico-system,Attempt:1,}" Sep 12 23:56:53.730851 systemd[1]: Started cri-containerd-36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33.scope - libcontainer container 36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33. Sep 12 23:56:53.796889 sshd[5510]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:53.814192 systemd[1]: sshd@7-172.31.20.162:22-147.75.109.163:48758.service: Deactivated successfully. Sep 12 23:56:53.824991 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 23:56:53.827456 systemd-logind[1997]: Session 8 logged out. Waiting for processes to exit. Sep 12 23:56:53.830738 systemd-logind[1997]: Removed session 8. Sep 12 23:56:53.911833 containerd[2027]: time="2025-09-12T23:56:53.863607806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:53.911833 containerd[2027]: time="2025-09-12T23:56:53.863715014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:53.911833 containerd[2027]: time="2025-09-12T23:56:53.863752334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:53.911833 containerd[2027]: time="2025-09-12T23:56:53.863904614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:54.121124 systemd[1]: Started cri-containerd-b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8.scope - libcontainer container b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8. 
Sep 12 23:56:54.577484 containerd[2027]: time="2025-09-12T23:56:54.576746761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xpl6r,Uid:c1a3fe73-b569-44ae-8604-d1dae69d130b,Namespace:kube-system,Attempt:1,} returns sandbox id \"b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8\"" Sep 12 23:56:54.589887 containerd[2027]: time="2025-09-12T23:56:54.589415701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f4475-t5fsb,Uid:64bd2045-89ff-4c72-8c98-7616c71dee49,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33\"" Sep 12 23:56:54.606147 containerd[2027]: time="2025-09-12T23:56:54.605886289Z" level=info msg="CreateContainer within sandbox \"b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:56:54.751842 containerd[2027]: time="2025-09-12T23:56:54.751668266Z" level=info msg="CreateContainer within sandbox \"b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3864a443c0aab7b35bc9a9326ff251da9e91393cd354ce0f1d7a01e2e0d79d07\"" Sep 12 23:56:54.756022 containerd[2027]: time="2025-09-12T23:56:54.755851310Z" level=info msg="StartContainer for \"3864a443c0aab7b35bc9a9326ff251da9e91393cd354ce0f1d7a01e2e0d79d07\"" Sep 12 23:56:54.818403 systemd-networkd[1851]: califb663e36f9b: Link UP Sep 12 23:56:54.831623 systemd-networkd[1851]: califb663e36f9b: Gained carrier Sep 12 23:56:54.871919 systemd-networkd[1851]: caliddf4aefc9ee: Gained IPv6LL Sep 12 23:56:54.897268 kubelet[3403]: I0912 23:56:54.897142 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7cfd7d6fcf-p4xpc" podStartSLOduration=3.940631084 podStartE2EDuration="10.897117831s" podCreationTimestamp="2025-09-12 23:56:44 +0000 UTC" firstStartedPulling="2025-09-12 23:56:44.880107461 +0000 UTC m=+48.709711839" lastFinishedPulling="2025-09-12 23:56:51.83659422 +0000 UTC m=+55.666198586" observedRunningTime="2025-09-12 23:56:54.260957904 +0000 UTC m=+58.090562426" watchObservedRunningTime="2025-09-12 23:56:54.897117831 +0000 UTC m=+58.726722209" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.087 [INFO][5592] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.187 [INFO][5592] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0 calico-apiserver-88f4475- calico-apiserver eeb53fe4-c09c-4147-b78d-113cabffc7ee 1021 0 2025-09-12 23:56:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:88f4475 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-162 calico-apiserver-88f4475-4dq8d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb663e36f9b [] [] }} ContainerID="71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-4dq8d" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.189 [INFO][5592] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-4dq8d" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.540 [INFO][5655] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" HandleID="k8s-pod-network.71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.541 [INFO][5655] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" HandleID="k8s-pod-network.71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001211a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-162", "pod":"calico-apiserver-88f4475-4dq8d", "timestamp":"2025-09-12 23:56:54.540305881 +0000 UTC"}, Hostname:"ip-172-31-20-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.541 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.542 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.543 [INFO][5655] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-162' Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.633 [INFO][5655] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" host="ip-172-31-20-162" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.665 [INFO][5655] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-162" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.691 [INFO][5655] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.700 [INFO][5655] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.724 [INFO][5655] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.724 [INFO][5655] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" host="ip-172-31-20-162" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.736 [INFO][5655] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3 Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.761 [INFO][5655] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 
handle="k8s-pod-network.71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" host="ip-172-31-20-162" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.793 [INFO][5655] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.71/26] block=192.168.55.64/26 handle="k8s-pod-network.71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" host="ip-172-31-20-162" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.794 [INFO][5655] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.71/26] handle="k8s-pod-network.71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" host="ip-172-31-20-162" Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.794 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:54.927873 containerd[2027]: 2025-09-12 23:56:54.794 [INFO][5655] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.71/26] IPv6=[] ContainerID="71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" HandleID="k8s-pod-network.71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:56:54.931894 containerd[2027]: 2025-09-12 23:56:54.802 [INFO][5592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-4dq8d" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0", GenerateName:"calico-apiserver-88f4475-", Namespace:"calico-apiserver", SelfLink:"", UID:"eeb53fe4-c09c-4147-b78d-113cabffc7ee", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f4475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"", Pod:"calico-apiserver-88f4475-4dq8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb663e36f9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:54.931894 containerd[2027]: 2025-09-12 23:56:54.803 [INFO][5592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.71/32] ContainerID="71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-4dq8d" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:56:54.931894 containerd[2027]: 2025-09-12 23:56:54.804 [INFO][5592] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to califb663e36f9b ContainerID="71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-4dq8d" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:56:54.931894 containerd[2027]: 2025-09-12 23:56:54.843 [INFO][5592] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-4dq8d" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:56:54.931894 containerd[2027]: 2025-09-12 23:56:54.850 [INFO][5592] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-4dq8d" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0", GenerateName:"calico-apiserver-88f4475-", Namespace:"calico-apiserver", SelfLink:"", UID:"eeb53fe4-c09c-4147-b78d-113cabffc7ee", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f4475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3", Pod:"calico-apiserver-88f4475-4dq8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb663e36f9b", MAC:"ba:66:9e:33:a4:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:54.931894 containerd[2027]: 2025-09-12 23:56:54.911 [INFO][5592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3" Namespace="calico-apiserver" Pod="calico-apiserver-88f4475-4dq8d" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:56:54.985215 systemd[1]: Started cri-containerd-3864a443c0aab7b35bc9a9326ff251da9e91393cd354ce0f1d7a01e2e0d79d07.scope - libcontainer container 3864a443c0aab7b35bc9a9326ff251da9e91393cd354ce0f1d7a01e2e0d79d07. 
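The kubelet latency record for whisker-7cfd7d6fcf-p4xpc a few records back decodes as follows: podStartE2EDuration (10.897117831s) is observedRunningTime minus podCreationTimestamp, and podStartSLOduration (3.940631084s) is that same span minus the image-pull window (firstStartedPulling to lastFinishedPulling), since pull time is excluded from the startup SLO. A sketch reproducing the arithmetic from the logged timestamps; the formula is inferred from the logged fields rather than quoted from kubelet source, and the result lands within a few nanoseconds of the logged value, presumably because kubelet samples its clock separately for each field:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-12 23:56:44 +0000 UTC")
        firstPull := mustParse("2025-09-12 23:56:44.880107461 +0000 UTC")
        lastPull := mustParse("2025-09-12 23:56:51.83659422 +0000 UTC")
        running := mustParse("2025-09-12 23:56:54.897117831 +0000 UTC")

        e2e := running.Sub(created)
        slo := e2e - lastPull.Sub(firstPull) // exclude image-pull time
        fmt.Println(e2e, slo)                // 10.897117831s 3.940631072s
    }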
Sep 12 23:56:55.000812 systemd-networkd[1851]: cali989d207671d: Gained IPv6LL Sep 12 23:56:55.082295 systemd-networkd[1851]: cali0b8938994c7: Link UP Sep 12 23:56:55.090816 systemd-networkd[1851]: cali0b8938994c7: Gained carrier Sep 12 23:56:55.144725 containerd[2027]: time="2025-09-12T23:56:55.143834592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:55.144725 containerd[2027]: time="2025-09-12T23:56:55.143931336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:55.145100 containerd[2027]: time="2025-09-12T23:56:55.144844668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:55.145779 containerd[2027]: time="2025-09-12T23:56:55.145564824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:55.195944 containerd[2027]: time="2025-09-12T23:56:55.195025464Z" level=info msg="StartContainer for \"3864a443c0aab7b35bc9a9326ff251da9e91393cd354ce0f1d7a01e2e0d79d07\" returns successfully" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.345 [INFO][5631] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.488 [INFO][5631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0 calico-kube-controllers-5bc58cd45c- calico-system cdcda89e-ca4a-4a78-8c84-d924c20fa355 1017 0 2025-09-12 23:56:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bc58cd45c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-162 calico-kube-controllers-5bc58cd45c-f245z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0b8938994c7 [] [] }} ContainerID="768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" Namespace="calico-system" Pod="calico-kube-controllers-5bc58cd45c-f245z" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.488 [INFO][5631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" Namespace="calico-system" Pod="calico-kube-controllers-5bc58cd45c-f245z" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.759 [INFO][5686] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" HandleID="k8s-pod-network.768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.759 [INFO][5686] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" 
HandleID="k8s-pod-network.768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f4270), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-162", "pod":"calico-kube-controllers-5bc58cd45c-f245z", "timestamp":"2025-09-12 23:56:54.759471194 +0000 UTC"}, Hostname:"ip-172-31-20-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.759 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.795 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.796 [INFO][5686] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-162' Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.847 [INFO][5686] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" host="ip-172-31-20-162" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.875 [INFO][5686] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-162" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.912 [INFO][5686] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.922 [INFO][5686] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.932 [INFO][5686] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ip-172-31-20-162" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.932 [INFO][5686] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" host="ip-172-31-20-162" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.945 [INFO][5686] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73 Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:54.979 [INFO][5686] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" host="ip-172-31-20-162" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:55.025 [INFO][5686] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.72/26] block=192.168.55.64/26 handle="k8s-pod-network.768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" host="ip-172-31-20-162" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:55.027 [INFO][5686] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.72/26] handle="k8s-pod-network.768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" host="ip-172-31-20-162" Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:55.027 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
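
Each Calico message embedded in these containerd lines has a fixed shape: a timestamp, a bracketed level and worker id, a source file and line number, then the message (e.g. "2025-09-12 23:56:54.759 [INFO][5686] ipam/ipam.go 110: ..."). A self-contained Go parser for that shape, assuming only the line format visible above — the regexp and field names are mine:

// Parse the bracketed Calico CNI log format embedded in the containerd
// lines: "<timestamp> [<LEVEL>][<id>] <file> <line>: <message>".
package main

import (
	"fmt"
	"regexp"
)

var calicoLine = regexp.MustCompile(
	`^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	sample := "2025-09-12 23:56:54.759 [INFO][5686] ipam/ipam.go 110: " +
		"Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-162'"
	m := calicoLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("time=%s level=%s id=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
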
Sep 12 23:56:55.210866 containerd[2027]: 2025-09-12 23:56:55.027 [INFO][5686] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.72/26] IPv6=[] ContainerID="768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" HandleID="k8s-pod-network.768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:55.213894 containerd[2027]: 2025-09-12 23:56:55.059 [INFO][5631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" Namespace="calico-system" Pod="calico-kube-controllers-5bc58cd45c-f245z" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0", GenerateName:"calico-kube-controllers-5bc58cd45c-", Namespace:"calico-system", SelfLink:"", UID:"cdcda89e-ca4a-4a78-8c84-d924c20fa355", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bc58cd45c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"", Pod:"calico-kube-controllers-5bc58cd45c-f245z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b8938994c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:55.213894 containerd[2027]: 2025-09-12 23:56:55.059 [INFO][5631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.72/32] ContainerID="768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" Namespace="calico-system" Pod="calico-kube-controllers-5bc58cd45c-f245z" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:55.213894 containerd[2027]: 2025-09-12 23:56:55.059 [INFO][5631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b8938994c7 ContainerID="768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" Namespace="calico-system" Pod="calico-kube-controllers-5bc58cd45c-f245z" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:55.213894 containerd[2027]: 2025-09-12 23:56:55.105 [INFO][5631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" Namespace="calico-system" Pod="calico-kube-controllers-5bc58cd45c-f245z" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:55.213894 containerd[2027]: 
2025-09-12 23:56:55.125 [INFO][5631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" Namespace="calico-system" Pod="calico-kube-controllers-5bc58cd45c-f245z" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0", GenerateName:"calico-kube-controllers-5bc58cd45c-", Namespace:"calico-system", SelfLink:"", UID:"cdcda89e-ca4a-4a78-8c84-d924c20fa355", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bc58cd45c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73", Pod:"calico-kube-controllers-5bc58cd45c-f245z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b8938994c7", MAC:"36:b3:34:7d:e1:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:55.213894 containerd[2027]: 2025-09-12 23:56:55.182 [INFO][5631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73" Namespace="calico-system" Pod="calico-kube-controllers-5bc58cd45c-f245z" WorkloadEndpoint="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:55.273882 systemd[1]: Started cri-containerd-71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3.scope - libcontainer container 71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3. Sep 12 23:56:55.349132 containerd[2027]: time="2025-09-12T23:56:55.348377905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:55.349132 containerd[2027]: time="2025-09-12T23:56:55.348507313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:55.349132 containerd[2027]: time="2025-09-12T23:56:55.348576517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:55.349132 containerd[2027]: time="2025-09-12T23:56:55.348746689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:55.416897 systemd[1]: Started cri-containerd-768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73.scope - libcontainer container 768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73. Sep 12 23:56:55.981834 containerd[2027]: time="2025-09-12T23:56:55.981314824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bc58cd45c-f245z,Uid:cdcda89e-ca4a-4a78-8c84-d924c20fa355,Namespace:calico-system,Attempt:1,} returns sandbox id \"768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73\"" Sep 12 23:56:56.003132 containerd[2027]: time="2025-09-12T23:56:56.003057300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-88f4475-4dq8d,Uid:eeb53fe4-c09c-4147-b78d-113cabffc7ee,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3\"" Sep 12 23:56:56.215959 systemd-networkd[1851]: califb663e36f9b: Gained IPv6LL Sep 12 23:56:56.309332 kubelet[3403]: I0912 23:56:56.309174 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xpl6r" podStartSLOduration=56.309145754 podStartE2EDuration="56.309145754s" podCreationTimestamp="2025-09-12 23:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:56:55.292827565 +0000 UTC m=+59.122431967" watchObservedRunningTime="2025-09-12 23:56:56.309145754 +0000 UTC m=+60.138750132" Sep 12 23:56:56.498657 containerd[2027]: time="2025-09-12T23:56:56.498478695Z" level=info msg="StopPodSandbox for \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\"" Sep 12 23:56:56.728306 systemd-networkd[1851]: cali0b8938994c7: Gained IPv6LL Sep 12 23:56:56.924754 kernel: bpftool[5894]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.679 [WARNING][5869] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f", Pod:"coredns-668d6bf9bc-vm44c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia444d5d2f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.685 [INFO][5869] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.689 [INFO][5869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" iface="eth0" netns="" Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.689 [INFO][5869] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.689 [INFO][5869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.836 [INFO][5878] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" HandleID="k8s-pod-network.bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.836 [INFO][5878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.837 [INFO][5878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.903 [WARNING][5878] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" HandleID="k8s-pod-network.bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.904 [INFO][5878] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" HandleID="k8s-pod-network.bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.913 [INFO][5878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:56.931455 containerd[2027]: 2025-09-12 23:56:56.919 [INFO][5869] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:56.934066 containerd[2027]: time="2025-09-12T23:56:56.931488101Z" level=info msg="TearDown network for sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\" successfully" Sep 12 23:56:56.934066 containerd[2027]: time="2025-09-12T23:56:56.931530665Z" level=info msg="StopPodSandbox for \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\" returns successfully" Sep 12 23:56:56.935493 containerd[2027]: time="2025-09-12T23:56:56.934989449Z" level=info msg="RemovePodSandbox for \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\"" Sep 12 23:56:56.938037 containerd[2027]: time="2025-09-12T23:56:56.937763585Z" level=info msg="Forcibly stopping sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\"" Sep 12 23:56:57.282020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707271311.mount: Deactivated successfully. Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.105 [WARNING][5903] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6dbd52fd-63e0-48d0-bb70-dd7ef684ad7e", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"59f2c9d8650e4470166d4731fb6b8c0bad9b8f3d57176165f7f42dc5e44e060f", Pod:"coredns-668d6bf9bc-vm44c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia444d5d2f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.105 [INFO][5903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.105 [INFO][5903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" iface="eth0" netns="" Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.105 [INFO][5903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.110 [INFO][5903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.298 [INFO][5912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" HandleID="k8s-pod-network.bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.305 [INFO][5912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.306 [INFO][5912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
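
The teardown passes above (and the one continuing below) all release addresses the same way: first by handle ID; when that handle is already gone, ipam_plugin.go logs "Asked to release address but it doesn't exist. Ignoring" and falls back to releasing by workload ID. A rough Go sketch of that fallback ordering, with a plain map standing in for the datastore — not Calico's actual release code:

package main

import "fmt"

// allocations maps a handle or workload ID to an IP; invented for illustration.
var allocations = map[string]string{
	"workload-a": "192.168.55.67",
}

func release(key string) bool {
	if _, ok := allocations[key]; !ok {
		return false
	}
	delete(allocations, key)
	return true
}

// releaseAddress tries the handle first, then falls back to the workload ID,
// mirroring ipam_plugin.go lines 412 ("using handleID") and 440 ("using workloadID").
func releaseAddress(handleID, workloadID string) {
	if release(handleID) {
		fmt.Println("released by handleID", handleID)
		return
	}
	fmt.Println("handle", handleID, "doesn't exist, ignoring; trying workloadID")
	if release(workloadID) {
		fmt.Println("released by workloadID", workloadID)
	}
}

func main() {
	releaseAddress("k8s-pod-network.example-handle", "workload-a")
}
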
Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.358 [WARNING][5912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" HandleID="k8s-pod-network.bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.358 [INFO][5912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" HandleID="k8s-pod-network.bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--vm44c-eth0" Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.379 [INFO][5912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:57.394072 containerd[2027]: 2025-09-12 23:56:57.384 [INFO][5903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965" Sep 12 23:56:57.398187 containerd[2027]: time="2025-09-12T23:56:57.394124223Z" level=info msg="TearDown network for sandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\" successfully" Sep 12 23:56:57.404072 containerd[2027]: time="2025-09-12T23:56:57.403736547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:57.404072 containerd[2027]: time="2025-09-12T23:56:57.403838343Z" level=info msg="RemovePodSandbox \"bd750cd7e7c081674392ef8b04db26d4b65a61ba0f78a4347b99211262dab965\" returns successfully" Sep 12 23:56:57.406595 containerd[2027]: time="2025-09-12T23:56:57.406219095Z" level=info msg="StopPodSandbox for \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\"" Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.607 [WARNING][5934] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0", GenerateName:"calico-apiserver-88f4475-", Namespace:"calico-apiserver", SelfLink:"", UID:"64bd2045-89ff-4c72-8c98-7616c71dee49", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f4475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33", Pod:"calico-apiserver-88f4475-t5fsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali989d207671d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.611 [INFO][5934] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.611 [INFO][5934] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" iface="eth0" netns="" Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.611 [INFO][5934] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.611 [INFO][5934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.713 [INFO][5942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" HandleID="k8s-pod-network.b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.715 [INFO][5942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.716 [INFO][5942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.737 [WARNING][5942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" HandleID="k8s-pod-network.b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.737 [INFO][5942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" HandleID="k8s-pod-network.b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.745 [INFO][5942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:57.758972 containerd[2027]: 2025-09-12 23:56:57.751 [INFO][5934] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:57.758972 containerd[2027]: time="2025-09-12T23:56:57.758782913Z" level=info msg="TearDown network for sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\" successfully" Sep 12 23:56:57.758972 containerd[2027]: time="2025-09-12T23:56:57.758937965Z" level=info msg="StopPodSandbox for \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\" returns successfully" Sep 12 23:56:57.762689 containerd[2027]: time="2025-09-12T23:56:57.761344421Z" level=info msg="RemovePodSandbox for \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\"" Sep 12 23:56:57.762689 containerd[2027]: time="2025-09-12T23:56:57.761393441Z" level=info msg="Forcibly stopping sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\"" Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:57.927 [WARNING][5956] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0", GenerateName:"calico-apiserver-88f4475-", Namespace:"calico-apiserver", SelfLink:"", UID:"64bd2045-89ff-4c72-8c98-7616c71dee49", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f4475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33", Pod:"calico-apiserver-88f4475-t5fsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali989d207671d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:57.928 [INFO][5956] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:57.928 [INFO][5956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" iface="eth0" netns="" Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:57.929 [INFO][5956] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:57.929 [INFO][5956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:58.042 [INFO][5964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" HandleID="k8s-pod-network.b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:58.043 [INFO][5964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:58.044 [INFO][5964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:58.069 [WARNING][5964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" HandleID="k8s-pod-network.b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:58.073 [INFO][5964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" HandleID="k8s-pod-network.b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--t5fsb-eth0" Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:58.080 [INFO][5964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:58.093975 containerd[2027]: 2025-09-12 23:56:58.085 [INFO][5956] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65" Sep 12 23:56:58.096991 containerd[2027]: time="2025-09-12T23:56:58.094046103Z" level=info msg="TearDown network for sandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\" successfully" Sep 12 23:56:58.102638 containerd[2027]: time="2025-09-12T23:56:58.102116055Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:58.103258 containerd[2027]: time="2025-09-12T23:56:58.102911355Z" level=info msg="RemovePodSandbox \"b9f6918b2b1cf0cc00db9899a7bf801a0a54eb5364dcd95ca1687bc89f092b65\" returns successfully" Sep 12 23:56:58.105009 containerd[2027]: time="2025-09-12T23:56:58.104940879Z" level=info msg="StopPodSandbox for \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\"" Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.324 [WARNING][5991] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" WorkloadEndpoint="ip--172--31--20--162-k8s-whisker--b884ffb96--z8ck8-eth0" Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.324 [INFO][5991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.324 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" iface="eth0" netns="" Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.324 [INFO][5991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.324 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.461 [INFO][6001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" HandleID="k8s-pod-network.d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Workload="ip--172--31--20--162-k8s-whisker--b884ffb96--z8ck8-eth0" Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.464 [INFO][6001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.464 [INFO][6001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.500 [WARNING][6001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" HandleID="k8s-pod-network.d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Workload="ip--172--31--20--162-k8s-whisker--b884ffb96--z8ck8-eth0" Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.500 [INFO][6001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" HandleID="k8s-pod-network.d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Workload="ip--172--31--20--162-k8s-whisker--b884ffb96--z8ck8-eth0" Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.511 [INFO][6001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:58.541517 containerd[2027]: 2025-09-12 23:56:58.524 [INFO][5991] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:58.545498 containerd[2027]: time="2025-09-12T23:56:58.541608293Z" level=info msg="TearDown network for sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\" successfully" Sep 12 23:56:58.545498 containerd[2027]: time="2025-09-12T23:56:58.541789649Z" level=info msg="StopPodSandbox for \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\" returns successfully" Sep 12 23:56:58.547969 containerd[2027]: time="2025-09-12T23:56:58.547915541Z" level=info msg="RemovePodSandbox for \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\"" Sep 12 23:56:58.547969 containerd[2027]: time="2025-09-12T23:56:58.548021237Z" level=info msg="Forcibly stopping sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\"" Sep 12 23:56:58.634398 systemd-networkd[1851]: vxlan.calico: Link UP Sep 12 23:56:58.634414 systemd-networkd[1851]: vxlan.calico: Gained carrier Sep 12 23:56:58.776982 (udev-worker)[5522]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:56:58.847282 systemd[1]: Started sshd@8-172.31.20.162:22-147.75.109.163:48768.service - OpenSSH per-connection server daemon (147.75.109.163:48768). 
Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:58.864 [WARNING][6015] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" WorkloadEndpoint="ip--172--31--20--162-k8s-whisker--b884ffb96--z8ck8-eth0" Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:58.864 [INFO][6015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:58.864 [INFO][6015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" iface="eth0" netns="" Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:58.864 [INFO][6015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:58.864 [INFO][6015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:59.023 [INFO][6045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" HandleID="k8s-pod-network.d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Workload="ip--172--31--20--162-k8s-whisker--b884ffb96--z8ck8-eth0" Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:59.024 [INFO][6045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:59.024 [INFO][6045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:59.057 [WARNING][6045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" HandleID="k8s-pod-network.d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Workload="ip--172--31--20--162-k8s-whisker--b884ffb96--z8ck8-eth0" Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:59.057 [INFO][6045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" HandleID="k8s-pod-network.d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Workload="ip--172--31--20--162-k8s-whisker--b884ffb96--z8ck8-eth0" Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:59.062 [INFO][6045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:59.077210 containerd[2027]: 2025-09-12 23:56:59.069 [INFO][6015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6" Sep 12 23:56:59.079520 containerd[2027]: time="2025-09-12T23:56:59.078758716Z" level=info msg="TearDown network for sandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\" successfully" Sep 12 23:56:59.089915 containerd[2027]: time="2025-09-12T23:56:59.089671468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
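
Note that every RemovePodSandbox in the passes above succeeds even though the sandbox status lookup fails with "not found": removal is treated as idempotent, the warning is logged, and the event is sent with a nil status. A small Go sketch of that treat-not-found-as-success pattern; the store and error value here are invented, not containerd's internals:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

var sandboxes = map[string]bool{} // hypothetical sandbox store

func status(id string) error {
	if !sandboxes[id] {
		return errNotFound
	}
	return nil
}

// removeSandbox warns on a missing sandbox but still returns success,
// mirroring the "Sending the event with nil podSandboxStatus" warnings above.
func removeSandbox(id string) error {
	if err := status(id); errors.Is(err, errNotFound) {
		fmt.Printf("warning: no status for %s: %v; sending event with nil status\n", id, err)
	}
	delete(sandboxes, id) // deleting a missing key is a no-op
	return nil            // idempotent: "RemovePodSandbox ... returns successfully"
}

func main() {
	_ = removeSandbox("example-sandbox-id")
}
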
Sep 12 23:56:59.089915 containerd[2027]: time="2025-09-12T23:56:59.089770708Z" level=info msg="RemovePodSandbox \"d7d6f96815cdcf341b2402c9644ce0ea62184742affa73becc33bc3bba4072d6\" returns successfully" Sep 12 23:56:59.094224 containerd[2027]: time="2025-09-12T23:56:59.093314344Z" level=info msg="StopPodSandbox for \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\"" Sep 12 23:56:59.111777 sshd[6043]: Accepted publickey for core from 147.75.109.163 port 48768 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:56:59.115468 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:59.137675 systemd-logind[1997]: New session 9 of user core. Sep 12 23:56:59.145900 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.226 [WARNING][6060] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0", GenerateName:"calico-kube-controllers-5bc58cd45c-", Namespace:"calico-system", SelfLink:"", UID:"cdcda89e-ca4a-4a78-8c84-d924c20fa355", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bc58cd45c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73", Pod:"calico-kube-controllers-5bc58cd45c-f245z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b8938994c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.226 [INFO][6060] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.226 [INFO][6060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" iface="eth0" netns="" Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.226 [INFO][6060] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.226 [INFO][6060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.387 [INFO][6068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" HandleID="k8s-pod-network.cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.387 [INFO][6068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.388 [INFO][6068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.442 [WARNING][6068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" HandleID="k8s-pod-network.cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.443 [INFO][6068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" HandleID="k8s-pod-network.cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.449 [INFO][6068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:59.472011 containerd[2027]: 2025-09-12 23:56:59.460 [INFO][6060] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:59.472011 containerd[2027]: time="2025-09-12T23:56:59.471738714Z" level=info msg="TearDown network for sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\" successfully" Sep 12 23:56:59.472011 containerd[2027]: time="2025-09-12T23:56:59.471790650Z" level=info msg="StopPodSandbox for \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\" returns successfully" Sep 12 23:56:59.477467 containerd[2027]: time="2025-09-12T23:56:59.475253298Z" level=info msg="RemovePodSandbox for \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\"" Sep 12 23:56:59.477467 containerd[2027]: time="2025-09-12T23:56:59.475317306Z" level=info msg="Forcibly stopping sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\"" Sep 12 23:56:59.592666 sshd[6043]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:59.600824 systemd[1]: sshd@8-172.31.20.162:22-147.75.109.163:48768.service: Deactivated successfully. Sep 12 23:56:59.606534 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 23:56:59.618062 systemd-logind[1997]: Session 9 logged out. Waiting for processes to exit. 
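
Interleaved with the CNI teardown, the sshd and systemd-logind lines trace one short SSH session (session 9, user core, publickey auth from 147.75.109.163). A quick Go snippet that pairs the session-opened and session-closed timestamps from those lines to get the session's duration; the layout string assumes the "Sep 12 23:56:59.115468" prefix format used throughout this log, which carries no year:

package main

import (
	"fmt"
	"time"
)

const layout = "Jan 2 15:04:05.000000" // matches the log's timestamp prefix

func main() {
	// Errors ignored for brevity; both strings match the layout exactly.
	opened, _ := time.Parse(layout, "Sep 12 23:56:59.115468") // session opened
	closed, _ := time.Parse(layout, "Sep 12 23:56:59.592666") // session closed
	fmt.Println("session 9 lasted", closed.Sub(opened)) // ~477ms
}
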
Sep 12 23:56:59.622668 systemd-logind[1997]: Removed session 9. Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.680 [WARNING][6096] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0", GenerateName:"calico-kube-controllers-5bc58cd45c-", Namespace:"calico-system", SelfLink:"", UID:"cdcda89e-ca4a-4a78-8c84-d924c20fa355", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bc58cd45c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73", Pod:"calico-kube-controllers-5bc58cd45c-f245z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b8938994c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.687 [INFO][6096] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.687 [INFO][6096] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" iface="eth0" netns="" Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.687 [INFO][6096] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.687 [INFO][6096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.865 [INFO][6109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" HandleID="k8s-pod-network.cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.868 [INFO][6109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.869 [INFO][6109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.890 [WARNING][6109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" HandleID="k8s-pod-network.cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.890 [INFO][6109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" HandleID="k8s-pod-network.cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Workload="ip--172--31--20--162-k8s-calico--kube--controllers--5bc58cd45c--f245z-eth0" Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.895 [INFO][6109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:59.906955 containerd[2027]: 2025-09-12 23:56:59.901 [INFO][6096] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7" Sep 12 23:56:59.910084 containerd[2027]: time="2025-09-12T23:56:59.907138772Z" level=info msg="TearDown network for sandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\" successfully" Sep 12 23:56:59.919653 containerd[2027]: time="2025-09-12T23:56:59.918676148Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:59.919653 containerd[2027]: time="2025-09-12T23:56:59.918856892Z" level=info msg="RemovePodSandbox \"cd3aa7c4851acd0f2770e956162ff5739b781834f692b9c96a2bfa7a5ec5fda7\" returns successfully" Sep 12 23:56:59.919865 containerd[2027]: time="2025-09-12T23:56:59.919677644Z" level=info msg="StopPodSandbox for \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\"" Sep 12 23:56:59.927698 containerd[2027]: time="2025-09-12T23:56:59.927616688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 23:56:59.927860 containerd[2027]: time="2025-09-12T23:56:59.927799016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:59.938400 containerd[2027]: time="2025-09-12T23:56:59.936989972Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:59.968960 containerd[2027]: time="2025-09-12T23:56:59.968326172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:56:59.973504 containerd[2027]: time="2025-09-12T23:56:59.972777680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 8.1070261s" Sep 12 23:56:59.973504 containerd[2027]: 
time="2025-09-12T23:56:59.972854300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 23:56:59.980363 containerd[2027]: time="2025-09-12T23:56:59.980067452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 23:56:59.984389 containerd[2027]: time="2025-09-12T23:56:59.982992596Z" level=info msg="CreateContainer within sandbox \"30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 23:57:00.064124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3643801395.mount: Deactivated successfully. Sep 12 23:57:00.076610 containerd[2027]: time="2025-09-12T23:57:00.075590021Z" level=info msg="CreateContainer within sandbox \"30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"298015a5d6fa0191698699b49861fe38996492075716e50856aed7808d7badcc\"" Sep 12 23:57:00.079582 containerd[2027]: time="2025-09-12T23:57:00.077041757Z" level=info msg="StartContainer for \"298015a5d6fa0191698699b49861fe38996492075716e50856aed7808d7badcc\"" Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.092 [WARNING][6150] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e0c64f6a-d5cd-4c90-b7b6-77627d265c99", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077", Pod:"goldmane-54d579b49d-6mb7r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidfd04e1f21b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.093 [INFO][6150] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.093 [INFO][6150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" iface="eth0" netns="" Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.093 [INFO][6150] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.093 [INFO][6150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.146 [INFO][6166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" HandleID="k8s-pod-network.973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.146 [INFO][6166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.147 [INFO][6166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.174 [WARNING][6166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" HandleID="k8s-pod-network.973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.174 [INFO][6166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" HandleID="k8s-pod-network.973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.182 [INFO][6166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:57:00.199951 containerd[2027]: 2025-09-12 23:57:00.192 [INFO][6150] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:57:00.201200 containerd[2027]: time="2025-09-12T23:57:00.201119273Z" level=info msg="TearDown network for sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\" successfully" Sep 12 23:57:00.201367 containerd[2027]: time="2025-09-12T23:57:00.201337445Z" level=info msg="StopPodSandbox for \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\" returns successfully" Sep 12 23:57:00.204739 containerd[2027]: time="2025-09-12T23:57:00.204660509Z" level=info msg="RemovePodSandbox for \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\"" Sep 12 23:57:00.204739 containerd[2027]: time="2025-09-12T23:57:00.204725789Z" level=info msg="Forcibly stopping sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\"" Sep 12 23:57:00.243614 systemd[1]: Started cri-containerd-298015a5d6fa0191698699b49861fe38996492075716e50856aed7808d7badcc.scope - libcontainer container 298015a5d6fa0191698699b49861fe38996492075716e50856aed7808d7badcc. 
Sep 12 23:57:00.380724 containerd[2027]: time="2025-09-12T23:57:00.379868514Z" level=info msg="StartContainer for \"298015a5d6fa0191698699b49861fe38996492075716e50856aed7808d7badcc\" returns successfully" Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.315 [WARNING][6198] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e0c64f6a-d5cd-4c90-b7b6-77627d265c99", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"30e4a2d9d02bbb771f346a9de389fa9983c06374a890205d4c350680b42cf077", Pod:"goldmane-54d579b49d-6mb7r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidfd04e1f21b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.318 [INFO][6198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.318 [INFO][6198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" iface="eth0" netns="" Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.318 [INFO][6198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.318 [INFO][6198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.382 [INFO][6212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" HandleID="k8s-pod-network.973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.383 [INFO][6212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.384 [INFO][6212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.406 [WARNING][6212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" HandleID="k8s-pod-network.973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.406 [INFO][6212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" HandleID="k8s-pod-network.973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Workload="ip--172--31--20--162-k8s-goldmane--54d579b49d--6mb7r-eth0" Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.413 [INFO][6212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:57:00.419220 containerd[2027]: 2025-09-12 23:57:00.415 [INFO][6198] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34" Sep 12 23:57:00.419220 containerd[2027]: time="2025-09-12T23:57:00.418114794Z" level=info msg="TearDown network for sandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\" successfully" Sep 12 23:57:00.433939 containerd[2027]: time="2025-09-12T23:57:00.433832514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:57:00.434081 containerd[2027]: time="2025-09-12T23:57:00.433954614Z" level=info msg="RemovePodSandbox \"973a2d0e1bc88e88dbb355fac81562fca44561ed8f455873912bde1bd200dc34\" returns successfully" Sep 12 23:57:00.436290 containerd[2027]: time="2025-09-12T23:57:00.436136298Z" level=info msg="StopPodSandbox for \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\"" Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.542 [WARNING][6236] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f60d9f74-1f96-4c24-9612-25c7fd4febe8", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42", Pod:"csi-node-driver-zn6sm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93221732749", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.543 [INFO][6236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.543 [INFO][6236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" iface="eth0" netns="" Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.543 [INFO][6236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.543 [INFO][6236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.605 [INFO][6247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" HandleID="k8s-pod-network.05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.605 [INFO][6247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.605 [INFO][6247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.618 [WARNING][6247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" HandleID="k8s-pod-network.05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.618 [INFO][6247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" HandleID="k8s-pod-network.05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.621 [INFO][6247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:57:00.627269 containerd[2027]: 2025-09-12 23:57:00.624 [INFO][6236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:57:00.627269 containerd[2027]: time="2025-09-12T23:57:00.627100099Z" level=info msg="TearDown network for sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\" successfully" Sep 12 23:57:00.627269 containerd[2027]: time="2025-09-12T23:57:00.627136135Z" level=info msg="StopPodSandbox for \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\" returns successfully" Sep 12 23:57:00.629128 containerd[2027]: time="2025-09-12T23:57:00.628447111Z" level=info msg="RemovePodSandbox for \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\"" Sep 12 23:57:00.629128 containerd[2027]: time="2025-09-12T23:57:00.628498555Z" level=info msg="Forcibly stopping sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\"" Sep 12 23:57:00.631881 systemd-networkd[1851]: vxlan.calico: Gained IPv6LL Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.699 [WARNING][6262] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f60d9f74-1f96-4c24-9612-25c7fd4febe8", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42", Pod:"csi-node-driver-zn6sm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93221732749", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.699 [INFO][6262] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.699 [INFO][6262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" iface="eth0" netns="" Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.699 [INFO][6262] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.699 [INFO][6262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.743 [INFO][6269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" HandleID="k8s-pod-network.05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.744 [INFO][6269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.744 [INFO][6269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.757 [WARNING][6269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" HandleID="k8s-pod-network.05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.757 [INFO][6269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" HandleID="k8s-pod-network.05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Workload="ip--172--31--20--162-k8s-csi--node--driver--zn6sm-eth0" Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.759 [INFO][6269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:57:00.765723 containerd[2027]: 2025-09-12 23:57:00.762 [INFO][6262] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5" Sep 12 23:57:00.768057 containerd[2027]: time="2025-09-12T23:57:00.765746612Z" level=info msg="TearDown network for sandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\" successfully" Sep 12 23:57:00.772697 containerd[2027]: time="2025-09-12T23:57:00.772628576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:57:00.772847 containerd[2027]: time="2025-09-12T23:57:00.772727984Z" level=info msg="RemovePodSandbox \"05acfc06bfc2ffedc24e6c9516025e7a101491f24b4765816a498aad257f3dc5\" returns successfully" Sep 12 23:57:00.773606 containerd[2027]: time="2025-09-12T23:57:00.773509040Z" level=info msg="StopPodSandbox for \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\"" Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.853 [WARNING][6283] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0", GenerateName:"calico-apiserver-88f4475-", Namespace:"calico-apiserver", SelfLink:"", UID:"eeb53fe4-c09c-4147-b78d-113cabffc7ee", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f4475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3", Pod:"calico-apiserver-88f4475-4dq8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb663e36f9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.854 [INFO][6283] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.854 [INFO][6283] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" iface="eth0" netns="" Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.854 [INFO][6283] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.854 [INFO][6283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.897 [INFO][6290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" HandleID="k8s-pod-network.fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.897 [INFO][6290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.897 [INFO][6290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.911 [WARNING][6290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" HandleID="k8s-pod-network.fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.911 [INFO][6290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" HandleID="k8s-pod-network.fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.914 [INFO][6290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:57:00.921217 containerd[2027]: 2025-09-12 23:57:00.916 [INFO][6283] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:57:00.921217 containerd[2027]: time="2025-09-12T23:57:00.919570521Z" level=info msg="TearDown network for sandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\" successfully" Sep 12 23:57:00.921217 containerd[2027]: time="2025-09-12T23:57:00.919608777Z" level=info msg="StopPodSandbox for \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\" returns successfully" Sep 12 23:57:00.924076 containerd[2027]: time="2025-09-12T23:57:00.923494905Z" level=info msg="RemovePodSandbox for \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\"" Sep 12 23:57:00.924076 containerd[2027]: time="2025-09-12T23:57:00.923609301Z" level=info msg="Forcibly stopping sandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\"" Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:00.995 [WARNING][6304] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0", GenerateName:"calico-apiserver-88f4475-", Namespace:"calico-apiserver", SelfLink:"", UID:"eeb53fe4-c09c-4147-b78d-113cabffc7ee", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"88f4475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3", Pod:"calico-apiserver-88f4475-4dq8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb663e36f9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:00.996 [INFO][6304] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:00.996 [INFO][6304] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" iface="eth0" netns="" Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:00.996 [INFO][6304] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:00.996 [INFO][6304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:01.039 [INFO][6311] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" HandleID="k8s-pod-network.fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:01.039 [INFO][6311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:01.039 [INFO][6311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:01.054 [WARNING][6311] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" HandleID="k8s-pod-network.fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:01.054 [INFO][6311] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" HandleID="k8s-pod-network.fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Workload="ip--172--31--20--162-k8s-calico--apiserver--88f4475--4dq8d-eth0" Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:01.057 [INFO][6311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:57:01.063643 containerd[2027]: 2025-09-12 23:57:01.060 [INFO][6304] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83" Sep 12 23:57:01.063643 containerd[2027]: time="2025-09-12T23:57:01.063632574Z" level=info msg="TearDown network for sandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\" successfully" Sep 12 23:57:01.070334 containerd[2027]: time="2025-09-12T23:57:01.070257102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:57:01.070483 containerd[2027]: time="2025-09-12T23:57:01.070398198Z" level=info msg="RemovePodSandbox \"fc0a16c730f0d5727a65f6ed08d430cde6c64aaa68687541cf4d7be844b8ba83\" returns successfully" Sep 12 23:57:01.071613 containerd[2027]: time="2025-09-12T23:57:01.071047506Z" level=info msg="StopPodSandbox for \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\"" Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.135 [WARNING][6325] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c1a3fe73-b569-44ae-8604-d1dae69d130b", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8", Pod:"coredns-668d6bf9bc-xpl6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddf4aefc9ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.135 [INFO][6325] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.135 [INFO][6325] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" iface="eth0" netns="" Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.135 [INFO][6325] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.135 [INFO][6325] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.175 [INFO][6333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" HandleID="k8s-pod-network.2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.175 [INFO][6333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.175 [INFO][6333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.190 [WARNING][6333] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" HandleID="k8s-pod-network.2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.191 [INFO][6333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" HandleID="k8s-pod-network.2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.198 [INFO][6333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:57:01.211989 containerd[2027]: 2025-09-12 23:57:01.207 [INFO][6325] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:57:01.215597 containerd[2027]: time="2025-09-12T23:57:01.214723122Z" level=info msg="TearDown network for sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\" successfully" Sep 12 23:57:01.215597 containerd[2027]: time="2025-09-12T23:57:01.214772394Z" level=info msg="StopPodSandbox for \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\" returns successfully" Sep 12 23:57:01.216334 containerd[2027]: time="2025-09-12T23:57:01.216235326Z" level=info msg="RemovePodSandbox for \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\"" Sep 12 23:57:01.216410 containerd[2027]: time="2025-09-12T23:57:01.216329970Z" level=info msg="Forcibly stopping sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\"" Sep 12 23:57:01.386754 kubelet[3403]: I0912 23:57:01.386666 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-6mb7r" podStartSLOduration=24.652040181 podStartE2EDuration="36.386639203s" podCreationTimestamp="2025-09-12 23:56:25 +0000 UTC" firstStartedPulling="2025-09-12 23:56:48.24136389 +0000 UTC m=+52.070968268" lastFinishedPulling="2025-09-12 23:56:59.975962924 +0000 UTC m=+63.805567290" observedRunningTime="2025-09-12 23:57:01.385844275 +0000 UTC m=+65.215448689" watchObservedRunningTime="2025-09-12 23:57:01.386639203 +0000 UTC m=+65.216243677" Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.298 [WARNING][6347] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c1a3fe73-b569-44ae-8604-d1dae69d130b", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-162", ContainerID:"b200eb3f3a610beb677b4606f1c7bd935b6832a243c933b2757f5e4e70ee4eb8", Pod:"coredns-668d6bf9bc-xpl6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddf4aefc9ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.299 [INFO][6347] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.299 [INFO][6347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" iface="eth0" netns="" Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.299 [INFO][6347] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.299 [INFO][6347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.391 [INFO][6358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" HandleID="k8s-pod-network.2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.393 [INFO][6358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.393 [INFO][6358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.422 [WARNING][6358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" HandleID="k8s-pod-network.2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.422 [INFO][6358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" HandleID="k8s-pod-network.2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Workload="ip--172--31--20--162-k8s-coredns--668d6bf9bc--xpl6r-eth0" Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.428 [INFO][6358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:57:01.448366 containerd[2027]: 2025-09-12 23:57:01.440 [INFO][6347] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05" Sep 12 23:57:01.448366 containerd[2027]: time="2025-09-12T23:57:01.447297187Z" level=info msg="TearDown network for sandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\" successfully" Sep 12 23:57:01.477843 containerd[2027]: time="2025-09-12T23:57:01.476571392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:57:01.477843 containerd[2027]: time="2025-09-12T23:57:01.476841068Z" level=info msg="RemovePodSandbox \"2cc5b53408a7d900876354bddbe2db02c5c154e2a17093cc7e8866f0458f1a05\" returns successfully" Sep 12 23:57:01.646117 containerd[2027]: time="2025-09-12T23:57:01.646046312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:01.648757 containerd[2027]: time="2025-09-12T23:57:01.648655892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 23:57:01.651619 containerd[2027]: time="2025-09-12T23:57:01.651497156Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:01.661255 containerd[2027]: time="2025-09-12T23:57:01.658631768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:01.661255 containerd[2027]: time="2025-09-12T23:57:01.660269948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.680139676s" Sep 12 23:57:01.661255 containerd[2027]: time="2025-09-12T23:57:01.660318128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 23:57:01.664034 containerd[2027]: 
time="2025-09-12T23:57:01.663729369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:57:01.679895 containerd[2027]: time="2025-09-12T23:57:01.679841193Z" level=info msg="CreateContainer within sandbox \"1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 23:57:01.741377 containerd[2027]: time="2025-09-12T23:57:01.741198549Z" level=info msg="CreateContainer within sandbox \"1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"20a424290b47f9cfc190e1ddffc8cf377722987dd38e3754aaa5b2307728126a\"" Sep 12 23:57:01.744351 containerd[2027]: time="2025-09-12T23:57:01.744266109Z" level=info msg="StartContainer for \"20a424290b47f9cfc190e1ddffc8cf377722987dd38e3754aaa5b2307728126a\"" Sep 12 23:57:01.830906 systemd[1]: Started cri-containerd-20a424290b47f9cfc190e1ddffc8cf377722987dd38e3754aaa5b2307728126a.scope - libcontainer container 20a424290b47f9cfc190e1ddffc8cf377722987dd38e3754aaa5b2307728126a. Sep 12 23:57:01.889231 containerd[2027]: time="2025-09-12T23:57:01.889148002Z" level=info msg="StartContainer for \"20a424290b47f9cfc190e1ddffc8cf377722987dd38e3754aaa5b2307728126a\" returns successfully" Sep 12 23:57:03.156099 ntpd[1990]: Listen normally on 7 vxlan.calico 192.168.55.64:123 Sep 12 23:57:03.157429 ntpd[1990]: 12 Sep 23:57:03 ntpd[1990]: Listen normally on 7 vxlan.calico 192.168.55.64:123 Sep 12 23:57:03.157429 ntpd[1990]: 12 Sep 23:57:03 ntpd[1990]: Listen normally on 8 cali84c5c90bab2 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 23:57:03.157429 ntpd[1990]: 12 Sep 23:57:03 ntpd[1990]: Listen normally on 9 calidfd04e1f21b [fe80::ecee:eeff:feee:eeee%5]:123 Sep 12 23:57:03.157429 ntpd[1990]: 12 Sep 23:57:03 ntpd[1990]: Listen normally on 10 calia444d5d2f8e [fe80::ecee:eeff:feee:eeee%6]:123 Sep 12 23:57:03.157429 ntpd[1990]: 12 Sep 23:57:03 ntpd[1990]: Listen normally on 11 cali93221732749 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 12 23:57:03.157429 ntpd[1990]: 12 Sep 23:57:03 ntpd[1990]: Listen normally on 12 cali989d207671d [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 23:57:03.157429 ntpd[1990]: 12 Sep 23:57:03 ntpd[1990]: Listen normally on 13 caliddf4aefc9ee [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 23:57:03.157429 ntpd[1990]: 12 Sep 23:57:03 ntpd[1990]: Listen normally on 14 califb663e36f9b [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 23:57:03.157429 ntpd[1990]: 12 Sep 23:57:03 ntpd[1990]: Listen normally on 15 cali0b8938994c7 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 23:57:03.157429 ntpd[1990]: 12 Sep 23:57:03 ntpd[1990]: Listen normally on 16 vxlan.calico [fe80::6493:ddff:fedc:8dc2%12]:123 Sep 12 23:57:03.156226 ntpd[1990]: Listen normally on 8 cali84c5c90bab2 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 23:57:03.156306 ntpd[1990]: Listen normally on 9 calidfd04e1f21b [fe80::ecee:eeff:feee:eeee%5]:123 Sep 12 23:57:03.156376 ntpd[1990]: Listen normally on 10 calia444d5d2f8e [fe80::ecee:eeff:feee:eeee%6]:123 Sep 12 23:57:03.156445 ntpd[1990]: Listen normally on 11 cali93221732749 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 12 23:57:03.156571 ntpd[1990]: Listen normally on 12 cali989d207671d [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 23:57:03.156651 ntpd[1990]: Listen normally on 13 caliddf4aefc9ee [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 23:57:03.156723 ntpd[1990]: Listen normally on 14 califb663e36f9b [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 23:57:03.156791 ntpd[1990]: Listen normally on 15 cali0b8938994c7 
[fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 23:57:03.156860 ntpd[1990]: Listen normally on 16 vxlan.calico [fe80::6493:ddff:fedc:8dc2%12]:123 Sep 12 23:57:04.636792 systemd[1]: Started sshd@9-172.31.20.162:22-147.75.109.163:38752.service - OpenSSH per-connection server daemon (147.75.109.163:38752). Sep 12 23:57:04.847880 sshd[6428]: Accepted publickey for core from 147.75.109.163 port 38752 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:57:04.852801 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:04.865333 systemd-logind[1997]: New session 10 of user core. Sep 12 23:57:04.875977 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 23:57:05.260815 sshd[6428]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:05.271347 systemd[1]: sshd@9-172.31.20.162:22-147.75.109.163:38752.service: Deactivated successfully. Sep 12 23:57:05.282980 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 23:57:05.289200 systemd-logind[1997]: Session 10 logged out. Waiting for processes to exit. Sep 12 23:57:05.308266 systemd[1]: Started sshd@10-172.31.20.162:22-147.75.109.163:38760.service - OpenSSH per-connection server daemon (147.75.109.163:38760). Sep 12 23:57:05.310405 systemd-logind[1997]: Removed session 10. Sep 12 23:57:05.528765 sshd[6454]: Accepted publickey for core from 147.75.109.163 port 38760 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:57:05.534530 sshd[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:05.551783 systemd-logind[1997]: New session 11 of user core. Sep 12 23:57:05.560635 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 23:57:05.971994 sshd[6454]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:05.989117 systemd[1]: sshd@10-172.31.20.162:22-147.75.109.163:38760.service: Deactivated successfully. Sep 12 23:57:06.001191 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 23:57:06.026385 systemd-logind[1997]: Session 11 logged out. Waiting for processes to exit. Sep 12 23:57:06.038046 systemd[1]: Started sshd@11-172.31.20.162:22-147.75.109.163:38776.service - OpenSSH per-connection server daemon (147.75.109.163:38776). Sep 12 23:57:06.045099 systemd-logind[1997]: Removed session 11. Sep 12 23:57:06.250322 sshd[6465]: Accepted publickey for core from 147.75.109.163 port 38776 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:57:06.255676 sshd[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:57:06.272218 systemd-logind[1997]: New session 12 of user core. Sep 12 23:57:06.277601 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 12 23:57:06.549533 containerd[2027]: time="2025-09-12T23:57:06.548064373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:06.553100 containerd[2027]: time="2025-09-12T23:57:06.553005505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 23:57:06.556170 containerd[2027]: time="2025-09-12T23:57:06.556096765Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:06.563363 containerd[2027]: time="2025-09-12T23:57:06.563307085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:57:06.565054 containerd[2027]: time="2025-09-12T23:57:06.564985501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 4.900927344s" Sep 12 23:57:06.565234 containerd[2027]: time="2025-09-12T23:57:06.565203673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 23:57:06.568974 containerd[2027]: time="2025-09-12T23:57:06.568909357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 23:57:06.572876 containerd[2027]: time="2025-09-12T23:57:06.572807689Z" level=info msg="CreateContainer within sandbox \"36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 23:57:06.609891 containerd[2027]: time="2025-09-12T23:57:06.608030833Z" level=info msg="CreateContainer within sandbox \"36e01e1a1db8c081c5add5e7d150d15090f249033a6a02cec0d582e118f36f33\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9306973e085af9df6c576eb793c3c1f1721e3e5d84b562ab8a44908c243ba2f6\"" Sep 12 23:57:06.610581 containerd[2027]: time="2025-09-12T23:57:06.610425133Z" level=info msg="StartContainer for \"9306973e085af9df6c576eb793c3c1f1721e3e5d84b562ab8a44908c243ba2f6\"" Sep 12 23:57:06.630144 sshd[6465]: pam_unix(sshd:session): session closed for user core Sep 12 23:57:06.649672 systemd[1]: sshd@11-172.31.20.162:22-147.75.109.163:38776.service: Deactivated successfully. Sep 12 23:57:06.656572 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 23:57:06.663069 systemd-logind[1997]: Session 12 logged out. Waiting for processes to exit. Sep 12 23:57:06.667855 systemd-logind[1997]: Removed session 12. Sep 12 23:57:06.689186 systemd[1]: Started cri-containerd-9306973e085af9df6c576eb793c3c1f1721e3e5d84b562ab8a44908c243ba2f6.scope - libcontainer container 9306973e085af9df6c576eb793c3c1f1721e3e5d84b562ab8a44908c243ba2f6. 
Sep 12 23:57:06.789753 containerd[2027]: time="2025-09-12T23:57:06.789689990Z" level=info msg="StartContainer for \"9306973e085af9df6c576eb793c3c1f1721e3e5d84b562ab8a44908c243ba2f6\" returns successfully"
Sep 12 23:57:07.431178 kubelet[3403]: I0912 23:57:07.430823 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-88f4475-t5fsb" podStartSLOduration=42.461472373 podStartE2EDuration="54.430799881s" podCreationTimestamp="2025-09-12 23:56:13 +0000 UTC" firstStartedPulling="2025-09-12 23:56:54.598802545 +0000 UTC m=+58.428406923" lastFinishedPulling="2025-09-12 23:57:06.568130065 +0000 UTC m=+70.397734431" observedRunningTime="2025-09-12 23:57:07.421087453 +0000 UTC m=+71.250692047" watchObservedRunningTime="2025-09-12 23:57:07.430799881 +0000 UTC m=+71.260404259"
Sep 12 23:57:08.390273 kubelet[3403]: I0912 23:57:08.390171 3403 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 23:57:09.916233 containerd[2027]: time="2025-09-12T23:57:09.916177134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:57:09.919563 containerd[2027]: time="2025-09-12T23:57:09.919488018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 12 23:57:09.922060 containerd[2027]: time="2025-09-12T23:57:09.921119022Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:57:09.926987 containerd[2027]: time="2025-09-12T23:57:09.926924802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.357947153s"
Sep 12 23:57:09.927299 containerd[2027]: time="2025-09-12T23:57:09.927237126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 12 23:57:09.927495 containerd[2027]: time="2025-09-12T23:57:09.927158202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:57:09.934986 containerd[2027]: time="2025-09-12T23:57:09.934518258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 23:57:09.964154 containerd[2027]: time="2025-09-12T23:57:09.964096218Z" level=info msg="CreateContainer within sandbox \"768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 23:57:10.019624 containerd[2027]: time="2025-09-12T23:57:10.018336698Z" level=info msg="CreateContainer within sandbox \"768c15dcdb00d3acffd72dfcee81aeb1e433c95105af85dce43a6a9c8aaacb73\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0649f3a084931ece59b295de7643ccb289b09f9fe366a46c47127bb2f2f961e7\""
Sep 12 23:57:10.020988 containerd[2027]: time="2025-09-12T23:57:10.020743694Z" level=info msg="StartContainer for \"0649f3a084931ece59b295de7643ccb289b09f9fe366a46c47127bb2f2f961e7\""
Sep 12 23:57:10.029667 kubelet[3403]: I0912 23:57:10.029612 3403 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 23:57:10.120000 systemd[1]: Started cri-containerd-0649f3a084931ece59b295de7643ccb289b09f9fe366a46c47127bb2f2f961e7.scope - libcontainer container 0649f3a084931ece59b295de7643ccb289b09f9fe366a46c47127bb2f2f961e7.
Sep 12 23:57:10.240608 containerd[2027]: time="2025-09-12T23:57:10.240401091Z" level=info msg="StartContainer for \"0649f3a084931ece59b295de7643ccb289b09f9fe366a46c47127bb2f2f961e7\" returns successfully"
Sep 12 23:57:10.299993 containerd[2027]: time="2025-09-12T23:57:10.299840775Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:57:10.304694 containerd[2027]: time="2025-09-12T23:57:10.304622931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 12 23:57:10.309914 containerd[2027]: time="2025-09-12T23:57:10.309815955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 374.599657ms"
Sep 12 23:57:10.309914 containerd[2027]: time="2025-09-12T23:57:10.309907119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 12 23:57:10.312271 containerd[2027]: time="2025-09-12T23:57:10.312186819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 23:57:10.317688 containerd[2027]: time="2025-09-12T23:57:10.317084739Z" level=info msg="CreateContainer within sandbox \"71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 23:57:10.351520 containerd[2027]: time="2025-09-12T23:57:10.351008476Z" level=info msg="CreateContainer within sandbox \"71655d1d20440f0c3b79968b00abdd0a6325feeb9c025ebcb3392dba962b95d3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d6c3c7b35c4d0db6641572e30bf2c1577971d5a98088f0ee0d3b811c5d814e31\""
Sep 12 23:57:10.352141 containerd[2027]: time="2025-09-12T23:57:10.352084744Z" level=info msg="StartContainer for \"d6c3c7b35c4d0db6641572e30bf2c1577971d5a98088f0ee0d3b811c5d814e31\""
Sep 12 23:57:10.467885 systemd[1]: Started cri-containerd-d6c3c7b35c4d0db6641572e30bf2c1577971d5a98088f0ee0d3b811c5d814e31.scope - libcontainer container d6c3c7b35c4d0db6641572e30bf2c1577971d5a98088f0ee0d3b811c5d814e31.
Sep 12 23:57:10.645370 containerd[2027]: time="2025-09-12T23:57:10.645205457Z" level=info msg="StartContainer for \"d6c3c7b35c4d0db6641572e30bf2c1577971d5a98088f0ee0d3b811c5d814e31\" returns successfully"
Sep 12 23:57:11.482159 kubelet[3403]: I0912 23:57:11.480618 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-88f4475-4dq8d" podStartSLOduration=44.178610594 podStartE2EDuration="58.480594029s" podCreationTimestamp="2025-09-12 23:56:13 +0000 UTC" firstStartedPulling="2025-09-12 23:56:56.009322308 +0000 UTC m=+59.838926686" lastFinishedPulling="2025-09-12 23:57:10.311305755 +0000 UTC m=+74.140910121" observedRunningTime="2025-09-12 23:57:11.470793629 +0000 UTC m=+75.300398031" watchObservedRunningTime="2025-09-12 23:57:11.480594029 +0000 UTC m=+75.310198407"
Sep 12 23:57:11.484876 kubelet[3403]: I0912 23:57:11.481501 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bc58cd45c-f245z" podStartSLOduration=32.541450023 podStartE2EDuration="46.481485425s" podCreationTimestamp="2025-09-12 23:56:25 +0000 UTC" firstStartedPulling="2025-09-12 23:56:55.989989684 +0000 UTC m=+59.819594050" lastFinishedPulling="2025-09-12 23:57:09.930025086 +0000 UTC m=+73.759629452" observedRunningTime="2025-09-12 23:57:10.449641696 +0000 UTC m=+74.279246194" watchObservedRunningTime="2025-09-12 23:57:11.481485425 +0000 UTC m=+75.311089803"
Sep 12 23:57:11.684828 systemd[1]: Started sshd@12-172.31.20.162:22-147.75.109.163:48710.service - OpenSSH per-connection server daemon (147.75.109.163:48710).
Sep 12 23:57:11.949524 sshd[6693]: Accepted publickey for core from 147.75.109.163 port 48710 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:11.955776 sshd[6693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:11.972428 systemd-logind[1997]: New session 13 of user core.
Sep 12 23:57:11.978945 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 23:57:12.441875 sshd[6693]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:12.463699 systemd[1]: sshd@12-172.31.20.162:22-147.75.109.163:48710.service: Deactivated successfully.
Sep 12 23:57:12.468385 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 23:57:12.480562 systemd-logind[1997]: Session 13 logged out. Waiting for processes to exit.
Sep 12 23:57:12.486886 systemd-logind[1997]: Removed session 13.
Sep 12 23:57:12.565587 containerd[2027]: time="2025-09-12T23:57:12.565236835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:57:12.569139 containerd[2027]: time="2025-09-12T23:57:12.568789351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 12 23:57:12.571597 containerd[2027]: time="2025-09-12T23:57:12.571275547Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:57:12.584887 containerd[2027]: time="2025-09-12T23:57:12.582212431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:57:12.586594 containerd[2027]: time="2025-09-12T23:57:12.585782731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.27352468s"
Sep 12 23:57:12.586594 containerd[2027]: time="2025-09-12T23:57:12.585863359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 12 23:57:12.595470 containerd[2027]: time="2025-09-12T23:57:12.595398811Z" level=info msg="CreateContainer within sandbox \"1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 23:57:12.638571 containerd[2027]: time="2025-09-12T23:57:12.638464219Z" level=info msg="CreateContainer within sandbox \"1efc2f99f6dafcaaa9457f42f63fd11b1e034deb14d63cf4ec46c9ac19d81c42\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e961ec3a6ecdf63d804df8701363e327ab4206270e030560582d07d146f2cceb\""
Sep 12 23:57:12.643939 containerd[2027]: time="2025-09-12T23:57:12.643851439Z" level=info msg="StartContainer for \"e961ec3a6ecdf63d804df8701363e327ab4206270e030560582d07d146f2cceb\""
Sep 12 23:57:12.651603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752927818.mount: Deactivated successfully.
Sep 12 23:57:12.741450 systemd[1]: Started cri-containerd-e961ec3a6ecdf63d804df8701363e327ab4206270e030560582d07d146f2cceb.scope - libcontainer container e961ec3a6ecdf63d804df8701363e327ab4206270e030560582d07d146f2cceb.
Sep 12 23:57:12.877406 containerd[2027]: time="2025-09-12T23:57:12.877338692Z" level=info msg="StartContainer for \"e961ec3a6ecdf63d804df8701363e327ab4206270e030560582d07d146f2cceb\" returns successfully"
Sep 12 23:57:13.713633 kubelet[3403]: I0912 23:57:13.713362 3403 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 23:57:13.713633 kubelet[3403]: I0912 23:57:13.713420 3403 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 23:57:15.300909 kubelet[3403]: I0912 23:57:15.300687 3403 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zn6sm" podStartSLOduration=27.735207396 podStartE2EDuration="50.300512324s" podCreationTimestamp="2025-09-12 23:56:25 +0000 UTC" firstStartedPulling="2025-09-12 23:56:50.024525367 +0000 UTC m=+53.854129745" lastFinishedPulling="2025-09-12 23:57:12.589830295 +0000 UTC m=+76.419434673" observedRunningTime="2025-09-12 23:57:13.467526991 +0000 UTC m=+77.297131393" watchObservedRunningTime="2025-09-12 23:57:15.300512324 +0000 UTC m=+79.130116762"
Sep 12 23:57:17.488226 systemd[1]: Started sshd@13-172.31.20.162:22-147.75.109.163:48722.service - OpenSSH per-connection server daemon (147.75.109.163:48722).
Sep 12 23:57:17.685761 sshd[6776]: Accepted publickey for core from 147.75.109.163 port 48722 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:17.689057 sshd[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:17.696696 systemd-logind[1997]: New session 14 of user core.
Sep 12 23:57:17.707893 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 23:57:17.982318 sshd[6776]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:17.989301 systemd[1]: sshd@13-172.31.20.162:22-147.75.109.163:48722.service: Deactivated successfully.
Sep 12 23:57:17.993167 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 23:57:17.995692 systemd-logind[1997]: Session 14 logged out. Waiting for processes to exit.
Sep 12 23:57:17.998090 systemd-logind[1997]: Removed session 14.
Sep 12 23:57:23.024102 systemd[1]: Started sshd@14-172.31.20.162:22-147.75.109.163:50590.service - OpenSSH per-connection server daemon (147.75.109.163:50590).
Sep 12 23:57:23.213594 sshd[6800]: Accepted publickey for core from 147.75.109.163 port 50590 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:23.215692 sshd[6800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:23.223927 systemd-logind[1997]: New session 15 of user core.
Sep 12 23:57:23.234167 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 23:57:23.534049 sshd[6800]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:23.544973 systemd[1]: sshd@14-172.31.20.162:22-147.75.109.163:50590.service: Deactivated successfully.
Sep 12 23:57:23.557184 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 23:57:23.565321 systemd-logind[1997]: Session 15 logged out. Waiting for processes to exit.
Sep 12 23:57:23.572768 systemd-logind[1997]: Removed session 15.
Sep 12 23:57:23.632779 systemd[1]: run-containerd-runc-k8s.io-0649f3a084931ece59b295de7643ccb289b09f9fe366a46c47127bb2f2f961e7-runc.CXpqMM.mount: Deactivated successfully.
Sep 12 23:57:28.581742 systemd[1]: Started sshd@15-172.31.20.162:22-147.75.109.163:50594.service - OpenSSH per-connection server daemon (147.75.109.163:50594).
Sep 12 23:57:28.781911 sshd[6831]: Accepted publickey for core from 147.75.109.163 port 50594 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:28.790124 sshd[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:28.808640 systemd-logind[1997]: New session 16 of user core.
Sep 12 23:57:28.818840 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 23:57:29.110947 sshd[6831]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:29.116807 systemd[1]: sshd@15-172.31.20.162:22-147.75.109.163:50594.service: Deactivated successfully.
Sep 12 23:57:29.123040 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 23:57:29.125631 systemd-logind[1997]: Session 16 logged out. Waiting for processes to exit.
Sep 12 23:57:29.128011 systemd-logind[1997]: Removed session 16.
Sep 12 23:57:29.149229 systemd[1]: Started sshd@16-172.31.20.162:22-147.75.109.163:50596.service - OpenSSH per-connection server daemon (147.75.109.163:50596).
Sep 12 23:57:29.335678 sshd[6843]: Accepted publickey for core from 147.75.109.163 port 50596 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:29.337479 sshd[6843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:29.345100 systemd-logind[1997]: New session 17 of user core.
Sep 12 23:57:29.351824 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 23:57:30.010326 sshd[6843]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:30.017684 systemd[1]: sshd@16-172.31.20.162:22-147.75.109.163:50596.service: Deactivated successfully.
Sep 12 23:57:30.024782 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 23:57:30.029324 systemd-logind[1997]: Session 17 logged out. Waiting for processes to exit.
Sep 12 23:57:30.056038 systemd[1]: Started sshd@17-172.31.20.162:22-147.75.109.163:56596.service - OpenSSH per-connection server daemon (147.75.109.163:56596).
Sep 12 23:57:30.058809 systemd-logind[1997]: Removed session 17.
Sep 12 23:57:30.233341 sshd[6854]: Accepted publickey for core from 147.75.109.163 port 56596 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:30.236400 sshd[6854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:30.245209 systemd-logind[1997]: New session 18 of user core.
Sep 12 23:57:30.251797 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 23:57:31.510332 sshd[6854]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:31.525008 systemd-logind[1997]: Session 18 logged out. Waiting for processes to exit.
Sep 12 23:57:31.526326 systemd[1]: sshd@17-172.31.20.162:22-147.75.109.163:56596.service: Deactivated successfully.
Sep 12 23:57:31.538599 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 23:57:31.576079 systemd[1]: Started sshd@18-172.31.20.162:22-147.75.109.163:56612.service - OpenSSH per-connection server daemon (147.75.109.163:56612).
Sep 12 23:57:31.579349 systemd-logind[1997]: Removed session 18.
Sep 12 23:57:31.789283 sshd[6893]: Accepted publickey for core from 147.75.109.163 port 56612 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:31.793172 sshd[6893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:31.804764 systemd-logind[1997]: New session 19 of user core.
Sep 12 23:57:31.813885 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 23:57:32.378033 sshd[6893]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:32.384328 systemd[1]: sshd@18-172.31.20.162:22-147.75.109.163:56612.service: Deactivated successfully.
Sep 12 23:57:32.389304 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 23:57:32.392092 systemd-logind[1997]: Session 19 logged out. Waiting for processes to exit.
Sep 12 23:57:32.394475 systemd-logind[1997]: Removed session 19.
Sep 12 23:57:32.419153 systemd[1]: Started sshd@19-172.31.20.162:22-147.75.109.163:56614.service - OpenSSH per-connection server daemon (147.75.109.163:56614).
Sep 12 23:57:32.597637 sshd[6908]: Accepted publickey for core from 147.75.109.163 port 56614 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:32.600263 sshd[6908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:32.608644 systemd-logind[1997]: New session 20 of user core.
Sep 12 23:57:32.615861 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 23:57:32.859352 sshd[6908]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:32.866181 systemd[1]: sshd@19-172.31.20.162:22-147.75.109.163:56614.service: Deactivated successfully.
Sep 12 23:57:32.870059 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 23:57:32.871425 systemd-logind[1997]: Session 20 logged out. Waiting for processes to exit.
Sep 12 23:57:32.873346 systemd-logind[1997]: Removed session 20.
Sep 12 23:57:37.897198 systemd[1]: Started sshd@20-172.31.20.162:22-147.75.109.163:56626.service - OpenSSH per-connection server daemon (147.75.109.163:56626).
Sep 12 23:57:38.073236 sshd[6920]: Accepted publickey for core from 147.75.109.163 port 56626 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:38.076887 sshd[6920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:38.084924 systemd-logind[1997]: New session 21 of user core.
Sep 12 23:57:38.093826 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 23:57:38.338730 sshd[6920]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:38.344531 systemd-logind[1997]: Session 21 logged out. Waiting for processes to exit.
Sep 12 23:57:38.346039 systemd[1]: sshd@20-172.31.20.162:22-147.75.109.163:56626.service: Deactivated successfully.
Sep 12 23:57:38.349397 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 23:57:38.356206 systemd-logind[1997]: Removed session 21.
Sep 12 23:57:43.386124 systemd[1]: Started sshd@21-172.31.20.162:22-147.75.109.163:47502.service - OpenSSH per-connection server daemon (147.75.109.163:47502).
Sep 12 23:57:43.559598 sshd[6962]: Accepted publickey for core from 147.75.109.163 port 47502 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:43.562268 sshd[6962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:43.570666 systemd-logind[1997]: New session 22 of user core.
Sep 12 23:57:43.578807 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 23:57:43.835785 sshd[6962]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:43.840703 systemd[1]: sshd@21-172.31.20.162:22-147.75.109.163:47502.service: Deactivated successfully.
Sep 12 23:57:43.845596 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 23:57:43.850187 systemd-logind[1997]: Session 22 logged out. Waiting for processes to exit.
Sep 12 23:57:43.853113 systemd-logind[1997]: Removed session 22.
Sep 12 23:57:48.884731 systemd[1]: Started sshd@22-172.31.20.162:22-147.75.109.163:47508.service - OpenSSH per-connection server daemon (147.75.109.163:47508).
Sep 12 23:57:49.077754 sshd[7000]: Accepted publickey for core from 147.75.109.163 port 47508 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:49.083857 sshd[7000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:49.098584 systemd-logind[1997]: New session 23 of user core.
Sep 12 23:57:49.106918 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 23:57:49.414878 sshd[7000]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:49.421430 systemd[1]: sshd@22-172.31.20.162:22-147.75.109.163:47508.service: Deactivated successfully.
Sep 12 23:57:49.425512 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 23:57:49.431285 systemd-logind[1997]: Session 23 logged out. Waiting for processes to exit.
Sep 12 23:57:49.435311 systemd-logind[1997]: Removed session 23.
Sep 12 23:57:54.459041 systemd[1]: Started sshd@23-172.31.20.162:22-147.75.109.163:60746.service - OpenSSH per-connection server daemon (147.75.109.163:60746).
Sep 12 23:57:54.664086 sshd[7012]: Accepted publickey for core from 147.75.109.163 port 60746 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:57:54.667719 sshd[7012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:57:54.681094 systemd-logind[1997]: New session 24 of user core.
Sep 12 23:57:54.689329 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 23:57:55.068795 sshd[7012]: pam_unix(sshd:session): session closed for user core
Sep 12 23:57:55.076632 systemd[1]: sshd@23-172.31.20.162:22-147.75.109.163:60746.service: Deactivated successfully.
Sep 12 23:57:55.083297 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 23:57:55.089923 systemd-logind[1997]: Session 24 logged out. Waiting for processes to exit.
Sep 12 23:57:55.092682 systemd-logind[1997]: Removed session 24.
Sep 12 23:58:00.112377 systemd[1]: Started sshd@24-172.31.20.162:22-147.75.109.163:38486.service - OpenSSH per-connection server daemon (147.75.109.163:38486).
Sep 12 23:58:00.305583 sshd[7027]: Accepted publickey for core from 147.75.109.163 port 38486 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:58:00.308392 sshd[7027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:58:00.321993 systemd-logind[1997]: New session 25 of user core.
Sep 12 23:58:00.329249 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 23:58:00.698264 sshd[7027]: pam_unix(sshd:session): session closed for user core
Sep 12 23:58:00.710334 systemd[1]: sshd@24-172.31.20.162:22-147.75.109.163:38486.service: Deactivated successfully.
Sep 12 23:58:00.717517 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 23:58:00.721512 systemd-logind[1997]: Session 25 logged out. Waiting for processes to exit.
Sep 12 23:58:00.725214 systemd-logind[1997]: Removed session 25.
Sep 12 23:58:05.737957 systemd[1]: Started sshd@25-172.31.20.162:22-147.75.109.163:38502.service - OpenSSH per-connection server daemon (147.75.109.163:38502).
Sep 12 23:58:05.931277 sshd[7062]: Accepted publickey for core from 147.75.109.163 port 38502 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:58:05.933234 sshd[7062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:58:05.944940 systemd-logind[1997]: New session 26 of user core.
Sep 12 23:58:05.950195 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 23:58:06.247107 sshd[7062]: pam_unix(sshd:session): session closed for user core
Sep 12 23:58:06.254602 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 23:58:06.258343 systemd-logind[1997]: Session 26 logged out. Waiting for processes to exit.
Sep 12 23:58:06.260780 systemd[1]: sshd@25-172.31.20.162:22-147.75.109.163:38502.service: Deactivated successfully.
Sep 12 23:58:06.273739 systemd-logind[1997]: Removed session 26.
Sep 12 23:58:20.133790 systemd[1]: cri-containerd-0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d.scope: Deactivated successfully.
Sep 12 23:58:20.134307 systemd[1]: cri-containerd-0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d.scope: Consumed 26.258s CPU time.
Sep 12 23:58:20.180322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d-rootfs.mount: Deactivated successfully.
Sep 12 23:58:20.202138 containerd[2027]: time="2025-09-12T23:58:20.177744670Z" level=info msg="shim disconnected" id=0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d namespace=k8s.io
Sep 12 23:58:20.202138 containerd[2027]: time="2025-09-12T23:58:20.202117847Z" level=warning msg="cleaning up after shim disconnected" id=0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d namespace=k8s.io
Sep 12 23:58:20.203083 containerd[2027]: time="2025-09-12T23:58:20.202149695Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:58:20.678225 kubelet[3403]: I0912 23:58:20.677821 3403 scope.go:117] "RemoveContainer" containerID="0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d"
Sep 12 23:58:20.682451 containerd[2027]: time="2025-09-12T23:58:20.682392685Z" level=info msg="CreateContainer within sandbox \"6e5b01bf6bb913cade01faf7039f84b073834454837687e87ab17768c3f7a7b2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 12 23:58:20.716933 containerd[2027]: time="2025-09-12T23:58:20.716759173Z" level=info msg="CreateContainer within sandbox \"6e5b01bf6bb913cade01faf7039f84b073834454837687e87ab17768c3f7a7b2\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0909bacc7df3940336591a605c9a30f927b6e33bd1abc493d9db770d41b53fcb\""
Sep 12 23:58:20.717784 containerd[2027]: time="2025-09-12T23:58:20.717717253Z" level=info msg="StartContainer for \"0909bacc7df3940336591a605c9a30f927b6e33bd1abc493d9db770d41b53fcb\""
Sep 12 23:58:20.779893 systemd[1]: Started cri-containerd-0909bacc7df3940336591a605c9a30f927b6e33bd1abc493d9db770d41b53fcb.scope - libcontainer container 0909bacc7df3940336591a605c9a30f927b6e33bd1abc493d9db770d41b53fcb.
Sep 12 23:58:20.835615 containerd[2027]: time="2025-09-12T23:58:20.835335470Z" level=info msg="StartContainer for \"0909bacc7df3940336591a605c9a30f927b6e33bd1abc493d9db770d41b53fcb\" returns successfully"
Sep 12 23:58:21.179994 systemd[1]: run-containerd-runc-k8s.io-0909bacc7df3940336591a605c9a30f927b6e33bd1abc493d9db770d41b53fcb-runc.HG8STx.mount: Deactivated successfully.
Sep 12 23:58:21.451806 systemd[1]: cri-containerd-e4df67986651b0b1829f64eb3da16e054ab9458b6d264f742bfc5a7abed3f56c.scope: Deactivated successfully.
Sep 12 23:58:21.452602 systemd[1]: cri-containerd-e4df67986651b0b1829f64eb3da16e054ab9458b6d264f742bfc5a7abed3f56c.scope: Consumed 6.122s CPU time, 17.7M memory peak, 0B memory swap peak.
Sep 12 23:58:21.503881 containerd[2027]: time="2025-09-12T23:58:21.503529073Z" level=info msg="shim disconnected" id=e4df67986651b0b1829f64eb3da16e054ab9458b6d264f742bfc5a7abed3f56c namespace=k8s.io
Sep 12 23:58:21.503881 containerd[2027]: time="2025-09-12T23:58:21.503631481Z" level=warning msg="cleaning up after shim disconnected" id=e4df67986651b0b1829f64eb3da16e054ab9458b6d264f742bfc5a7abed3f56c namespace=k8s.io
Sep 12 23:58:21.503881 containerd[2027]: time="2025-09-12T23:58:21.503653189Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:58:21.509162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4df67986651b0b1829f64eb3da16e054ab9458b6d264f742bfc5a7abed3f56c-rootfs.mount: Deactivated successfully.
Sep 12 23:58:21.687680 kubelet[3403]: I0912 23:58:21.687293 3403 scope.go:117] "RemoveContainer" containerID="e4df67986651b0b1829f64eb3da16e054ab9458b6d264f742bfc5a7abed3f56c"
Sep 12 23:58:21.690749 containerd[2027]: time="2025-09-12T23:58:21.690691910Z" level=info msg="CreateContainer within sandbox \"06f53ec297e4abf438dafbea7d2678b9c0c76ea8dfe0261c74f722f82f37db58\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 23:58:21.723171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1281854504.mount: Deactivated successfully.
Sep 12 23:58:21.734500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496246699.mount: Deactivated successfully.
Sep 12 23:58:21.744060 containerd[2027]: time="2025-09-12T23:58:21.743984198Z" level=info msg="CreateContainer within sandbox \"06f53ec297e4abf438dafbea7d2678b9c0c76ea8dfe0261c74f722f82f37db58\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"92b9cab59d751074f01c8450225961ac1b3aaa6007205a87548c82ae8104c950\""
Sep 12 23:58:21.744750 containerd[2027]: time="2025-09-12T23:58:21.744693854Z" level=info msg="StartContainer for \"92b9cab59d751074f01c8450225961ac1b3aaa6007205a87548c82ae8104c950\""
Sep 12 23:58:21.788854 systemd[1]: Started cri-containerd-92b9cab59d751074f01c8450225961ac1b3aaa6007205a87548c82ae8104c950.scope - libcontainer container 92b9cab59d751074f01c8450225961ac1b3aaa6007205a87548c82ae8104c950.
Sep 12 23:58:21.863080 containerd[2027]: time="2025-09-12T23:58:21.862919895Z" level=info msg="StartContainer for \"92b9cab59d751074f01c8450225961ac1b3aaa6007205a87548c82ae8104c950\" returns successfully"
Sep 12 23:58:24.724582 systemd[1]: cri-containerd-63767666f5332aa379fc00014d5468f8f9e38fb604df99e2ca494e2eeffad0cb.scope: Deactivated successfully.
Sep 12 23:58:24.725159 systemd[1]: cri-containerd-63767666f5332aa379fc00014d5468f8f9e38fb604df99e2ca494e2eeffad0cb.scope: Consumed 3.819s CPU time, 15.5M memory peak, 0B memory swap peak.
Sep 12 23:58:24.776739 containerd[2027]: time="2025-09-12T23:58:24.774620657Z" level=info msg="shim disconnected" id=63767666f5332aa379fc00014d5468f8f9e38fb604df99e2ca494e2eeffad0cb namespace=k8s.io
Sep 12 23:58:24.776739 containerd[2027]: time="2025-09-12T23:58:24.776687837Z" level=warning msg="cleaning up after shim disconnected" id=63767666f5332aa379fc00014d5468f8f9e38fb604df99e2ca494e2eeffad0cb namespace=k8s.io
Sep 12 23:58:24.776739 containerd[2027]: time="2025-09-12T23:58:24.776715257Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:58:24.779201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63767666f5332aa379fc00014d5468f8f9e38fb604df99e2ca494e2eeffad0cb-rootfs.mount: Deactivated successfully.
Sep 12 23:58:25.704896 kubelet[3403]: I0912 23:58:25.704853 3403 scope.go:117] "RemoveContainer" containerID="63767666f5332aa379fc00014d5468f8f9e38fb604df99e2ca494e2eeffad0cb"
Sep 12 23:58:25.708749 containerd[2027]: time="2025-09-12T23:58:25.708343338Z" level=info msg="CreateContainer within sandbox \"44352076bd3e89d4fa68eeca30914a7886ee01e55e601d8ca826f29acd44622b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 23:58:25.750053 containerd[2027]: time="2025-09-12T23:58:25.749976798Z" level=info msg="CreateContainer within sandbox \"44352076bd3e89d4fa68eeca30914a7886ee01e55e601d8ca826f29acd44622b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"99310afe491ae0bb927ce2add7683031739c5e02a57df66745ee66411182a43d\""
Sep 12 23:58:25.750930 containerd[2027]: time="2025-09-12T23:58:25.750840426Z" level=info msg="StartContainer for \"99310afe491ae0bb927ce2add7683031739c5e02a57df66745ee66411182a43d\""
Sep 12 23:58:25.817883 systemd[1]: Started cri-containerd-99310afe491ae0bb927ce2add7683031739c5e02a57df66745ee66411182a43d.scope - libcontainer container 99310afe491ae0bb927ce2add7683031739c5e02a57df66745ee66411182a43d.
Sep 12 23:58:25.892851 containerd[2027]: time="2025-09-12T23:58:25.892734535Z" level=info msg="StartContainer for \"99310afe491ae0bb927ce2add7683031739c5e02a57df66745ee66411182a43d\" returns successfully"
Sep 12 23:58:29.243310 kubelet[3403]: E0912 23:58:29.242468 3403 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-162?timeout=10s\": context deadline exceeded"
Sep 12 23:58:32.290426 systemd[1]: cri-containerd-0909bacc7df3940336591a605c9a30f927b6e33bd1abc493d9db770d41b53fcb.scope: Deactivated successfully.
Sep 12 23:58:32.331300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0909bacc7df3940336591a605c9a30f927b6e33bd1abc493d9db770d41b53fcb-rootfs.mount: Deactivated successfully.
Sep 12 23:58:32.344775 containerd[2027]: time="2025-09-12T23:58:32.344402639Z" level=info msg="shim disconnected" id=0909bacc7df3940336591a605c9a30f927b6e33bd1abc493d9db770d41b53fcb namespace=k8s.io
Sep 12 23:58:32.344775 containerd[2027]: time="2025-09-12T23:58:32.344476043Z" level=warning msg="cleaning up after shim disconnected" id=0909bacc7df3940336591a605c9a30f927b6e33bd1abc493d9db770d41b53fcb namespace=k8s.io
Sep 12 23:58:32.344775 containerd[2027]: time="2025-09-12T23:58:32.344497379Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:58:32.738457 kubelet[3403]: I0912 23:58:32.737313 3403 scope.go:117] "RemoveContainer" containerID="0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d"
Sep 12 23:58:32.738457 kubelet[3403]: I0912 23:58:32.737780 3403 scope.go:117] "RemoveContainer" containerID="0909bacc7df3940336591a605c9a30f927b6e33bd1abc493d9db770d41b53fcb"
Sep 12 23:58:32.741319 kubelet[3403]: E0912 23:58:32.739940 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-6vnws_tigera-operator(621ac99b-38dd-4570-a503-72b598be17ca)\"" pod="tigera-operator/tigera-operator-755d956888-6vnws" podUID="621ac99b-38dd-4570-a503-72b598be17ca"
Sep 12 23:58:32.742437 containerd[2027]: time="2025-09-12T23:58:32.742135873Z" level=info msg="RemoveContainer for \"0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d\""
Sep 12 23:58:32.749651 containerd[2027]: time="2025-09-12T23:58:32.749456629Z" level=info msg="RemoveContainer for \"0718af731ff552d291d1a7acd8ab51724e9c12ba172692313a6acca6ef8b6c3d\" returns successfully"
Sep 12 23:58:39.243726 kubelet[3403]: E0912 23:58:39.243113 3403 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-162?timeout=10s\": context deadline exceeded"