Sep 5 23:52:49.219305 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Sep 5 23:52:49.219349 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025 Sep 5 23:52:49.219373 kernel: KASLR disabled due to lack of seed Sep 5 23:52:49.219390 kernel: efi: EFI v2.7 by EDK II Sep 5 23:52:49.219406 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18 Sep 5 23:52:49.219421 kernel: ACPI: Early table checksum verification disabled Sep 5 23:52:49.219439 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Sep 5 23:52:49.219454 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Sep 5 23:52:49.219470 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 5 23:52:49.219485 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Sep 5 23:52:49.219505 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 5 23:52:49.219520 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Sep 5 23:52:49.219536 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Sep 5 23:52:49.219551 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Sep 5 23:52:49.219570 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 5 23:52:49.219590 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Sep 5 23:52:49.219607 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Sep 5 23:52:49.219623 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Sep 5 23:52:49.219640 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Sep 5 23:52:49.219656 kernel: printk: bootconsole [uart0] enabled Sep 5 23:52:49.219672 kernel: NUMA: Failed to initialise from firmware Sep 5 23:52:49.219689 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Sep 5 23:52:49.219741 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Sep 5 23:52:49.219764 kernel: Zone ranges: Sep 5 23:52:49.219782 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 5 23:52:49.219800 kernel: DMA32 empty Sep 5 23:52:49.219825 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Sep 5 23:52:49.219842 kernel: Movable zone start for each node Sep 5 23:52:49.219859 kernel: Early memory node ranges Sep 5 23:52:49.219876 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Sep 5 23:52:49.219894 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Sep 5 23:52:49.219911 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Sep 5 23:52:49.219929 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Sep 5 23:52:49.219946 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Sep 5 23:52:49.219962 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Sep 5 23:52:49.219979 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Sep 5 23:52:49.219996 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Sep 5 23:52:49.220013 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Sep 5 23:52:49.220034 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Sep 5 23:52:49.220052 kernel: psci: probing for conduit method from ACPI. Sep 5 23:52:49.220076 kernel: psci: PSCIv1.0 detected in firmware. Sep 5 23:52:49.220093 kernel: psci: Using standard PSCI v0.2 function IDs Sep 5 23:52:49.220111 kernel: psci: Trusted OS migration not required Sep 5 23:52:49.220134 kernel: psci: SMC Calling Convention v1.1 Sep 5 23:52:49.220152 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Sep 5 23:52:49.220170 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 5 23:52:49.220188 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 5 23:52:49.220206 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 5 23:52:49.220223 kernel: Detected PIPT I-cache on CPU0 Sep 5 23:52:49.220240 kernel: CPU features: detected: GIC system register CPU interface Sep 5 23:52:49.220257 kernel: CPU features: detected: Spectre-v2 Sep 5 23:52:49.220275 kernel: CPU features: detected: Spectre-v3a Sep 5 23:52:49.220292 kernel: CPU features: detected: Spectre-BHB Sep 5 23:52:49.220309 kernel: CPU features: detected: ARM erratum 1742098 Sep 5 23:52:49.220331 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Sep 5 23:52:49.220349 kernel: alternatives: applying boot alternatives Sep 5 23:52:49.220370 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 5 23:52:49.220389 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 5 23:52:49.220407 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 5 23:52:49.220425 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 23:52:49.220444 kernel: Fallback order for Node 0: 0 Sep 5 23:52:49.220463 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Sep 5 23:52:49.220480 kernel: Policy zone: Normal Sep 5 23:52:49.220498 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 23:52:49.220516 kernel: software IO TLB: area num 2. Sep 5 23:52:49.220539 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Sep 5 23:52:49.220558 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved) Sep 5 23:52:49.220577 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 5 23:52:49.220595 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 23:52:49.220613 kernel: rcu: RCU event tracing is enabled. Sep 5 23:52:49.220632 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 5 23:52:49.220650 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 23:52:49.220667 kernel: Tracing variant of Tasks RCU enabled. Sep 5 23:52:49.220685 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 5 23:52:49.220757 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 5 23:52:49.220784 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 5 23:52:49.220809 kernel: GICv3: 96 SPIs implemented Sep 5 23:52:49.220826 kernel: GICv3: 0 Extended SPIs implemented Sep 5 23:52:49.220844 kernel: Root IRQ handler: gic_handle_irq Sep 5 23:52:49.220861 kernel: GICv3: GICv3 features: 16 PPIs Sep 5 23:52:49.220878 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Sep 5 23:52:49.220896 kernel: ITS [mem 0x10080000-0x1009ffff] Sep 5 23:52:49.220913 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Sep 5 23:52:49.220931 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Sep 5 23:52:49.220949 kernel: GICv3: using LPI property table @0x00000004000d0000 Sep 5 23:52:49.220966 kernel: ITS: Using hypervisor restricted LPI range [128] Sep 5 23:52:49.220984 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Sep 5 23:52:49.221001 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 23:52:49.221022 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Sep 5 23:52:49.221040 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Sep 5 23:52:49.221058 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Sep 5 23:52:49.221075 kernel: Console: colour dummy device 80x25 Sep 5 23:52:49.221094 kernel: printk: console [tty1] enabled Sep 5 23:52:49.221111 kernel: ACPI: Core revision 20230628 Sep 5 23:52:49.221129 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Sep 5 23:52:49.221147 kernel: pid_max: default: 32768 minimum: 301 Sep 5 23:52:49.221165 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 23:52:49.221187 kernel: landlock: Up and running. Sep 5 23:52:49.221205 kernel: SELinux: Initializing. Sep 5 23:52:49.221223 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:52:49.221240 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:52:49.221258 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:52:49.221276 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:52:49.221294 kernel: rcu: Hierarchical SRCU implementation. Sep 5 23:52:49.221312 kernel: rcu: Max phase no-delay instances is 400. Sep 5 23:52:49.221330 kernel: Platform MSI: ITS@0x10080000 domain created Sep 5 23:52:49.221351 kernel: PCI/MSI: ITS@0x10080000 domain created Sep 5 23:52:49.221369 kernel: Remapping and enabling EFI services. Sep 5 23:52:49.221386 kernel: smp: Bringing up secondary CPUs ... Sep 5 23:52:49.221404 kernel: Detected PIPT I-cache on CPU1 Sep 5 23:52:49.221422 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Sep 5 23:52:49.221440 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Sep 5 23:52:49.221458 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Sep 5 23:52:49.221476 kernel: smp: Brought up 1 node, 2 CPUs Sep 5 23:52:49.221493 kernel: SMP: Total of 2 processors activated. 
Sep 5 23:52:49.221511 kernel: CPU features: detected: 32-bit EL0 Support Sep 5 23:52:49.221532 kernel: CPU features: detected: 32-bit EL1 Support Sep 5 23:52:49.221550 kernel: CPU features: detected: CRC32 instructions Sep 5 23:52:49.221578 kernel: CPU: All CPU(s) started at EL1 Sep 5 23:52:49.221601 kernel: alternatives: applying system-wide alternatives Sep 5 23:52:49.221619 kernel: devtmpfs: initialized Sep 5 23:52:49.221638 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 23:52:49.221656 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 5 23:52:49.221675 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 23:52:49.221693 kernel: SMBIOS 3.0.0 present. Sep 5 23:52:49.221858 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Sep 5 23:52:49.221879 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 23:52:49.221897 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 5 23:52:49.221916 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 5 23:52:49.221935 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 5 23:52:49.221953 kernel: audit: initializing netlink subsys (disabled) Sep 5 23:52:49.221972 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1 Sep 5 23:52:49.221997 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 23:52:49.222016 kernel: cpuidle: using governor menu Sep 5 23:52:49.222035 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 5 23:52:49.222053 kernel: ASID allocator initialised with 65536 entries Sep 5 23:52:49.222071 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 23:52:49.222090 kernel: Serial: AMBA PL011 UART driver Sep 5 23:52:49.222108 kernel: Modules: 17488 pages in range for non-PLT usage Sep 5 23:52:49.222126 kernel: Modules: 509008 pages in range for PLT usage Sep 5 23:52:49.222145 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 23:52:49.222167 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 23:52:49.222186 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 5 23:52:49.222204 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 5 23:52:49.222222 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 23:52:49.222241 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 23:52:49.222259 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 5 23:52:49.222278 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 5 23:52:49.222296 kernel: ACPI: Added _OSI(Module Device) Sep 5 23:52:49.222314 kernel: ACPI: Added _OSI(Processor Device) Sep 5 23:52:49.222336 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 23:52:49.222355 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 23:52:49.222373 kernel: ACPI: Interpreter enabled Sep 5 23:52:49.222392 kernel: ACPI: Using GIC for interrupt routing Sep 5 23:52:49.222410 kernel: ACPI: MCFG table detected, 1 entries Sep 5 23:52:49.222428 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Sep 5 23:52:49.222808 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 23:52:49.223034 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 5 23:52:49.223257 kernel: acpi PNP0A08:00: _OSC: OS now 
controls [PCIeHotplug PME AER PCIeCapability] Sep 5 23:52:49.223461 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Sep 5 23:52:49.223658 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Sep 5 23:52:49.223684 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Sep 5 23:52:49.223733 kernel: acpiphp: Slot [1] registered Sep 5 23:52:49.223757 kernel: acpiphp: Slot [2] registered Sep 5 23:52:49.223776 kernel: acpiphp: Slot [3] registered Sep 5 23:52:49.223794 kernel: acpiphp: Slot [4] registered Sep 5 23:52:49.223821 kernel: acpiphp: Slot [5] registered Sep 5 23:52:49.223839 kernel: acpiphp: Slot [6] registered Sep 5 23:52:49.223857 kernel: acpiphp: Slot [7] registered Sep 5 23:52:49.223876 kernel: acpiphp: Slot [8] registered Sep 5 23:52:49.223894 kernel: acpiphp: Slot [9] registered Sep 5 23:52:49.223912 kernel: acpiphp: Slot [10] registered Sep 5 23:52:49.223930 kernel: acpiphp: Slot [11] registered Sep 5 23:52:49.223948 kernel: acpiphp: Slot [12] registered Sep 5 23:52:49.223966 kernel: acpiphp: Slot [13] registered Sep 5 23:52:49.223984 kernel: acpiphp: Slot [14] registered Sep 5 23:52:49.224007 kernel: acpiphp: Slot [15] registered Sep 5 23:52:49.224025 kernel: acpiphp: Slot [16] registered Sep 5 23:52:49.224043 kernel: acpiphp: Slot [17] registered Sep 5 23:52:49.224061 kernel: acpiphp: Slot [18] registered Sep 5 23:52:49.224079 kernel: acpiphp: Slot [19] registered Sep 5 23:52:49.224097 kernel: acpiphp: Slot [20] registered Sep 5 23:52:49.224116 kernel: acpiphp: Slot [21] registered Sep 5 23:52:49.224134 kernel: acpiphp: Slot [22] registered Sep 5 23:52:49.224152 kernel: acpiphp: Slot [23] registered Sep 5 23:52:49.224173 kernel: acpiphp: Slot [24] registered Sep 5 23:52:49.224192 kernel: acpiphp: Slot [25] registered Sep 5 23:52:49.224210 kernel: acpiphp: Slot [26] registered Sep 5 23:52:49.224228 kernel: acpiphp: Slot [27] registered Sep 5 23:52:49.224246 kernel: acpiphp: Slot [28] registered Sep 5 23:52:49.224264 kernel: acpiphp: Slot [29] registered Sep 5 23:52:49.224282 kernel: acpiphp: Slot [30] registered Sep 5 23:52:49.224300 kernel: acpiphp: Slot [31] registered Sep 5 23:52:49.224318 kernel: PCI host bridge to bus 0000:00 Sep 5 23:52:49.224537 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Sep 5 23:52:49.224753 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 5 23:52:49.224945 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Sep 5 23:52:49.225127 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Sep 5 23:52:49.225370 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Sep 5 23:52:49.225596 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Sep 5 23:52:49.225841 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Sep 5 23:52:49.226076 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 5 23:52:49.226288 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Sep 5 23:52:49.226498 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 5 23:52:49.226801 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 5 23:52:49.227023 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Sep 5 23:52:49.228947 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Sep 5 23:52:49.229179 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Sep 5 23:52:49.229380 kernel: pci 0000:00:05.0: PME# 
supported from D0 D1 D2 D3hot D3cold Sep 5 23:52:49.229578 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Sep 5 23:52:49.229825 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Sep 5 23:52:49.230044 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Sep 5 23:52:49.230249 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Sep 5 23:52:49.230457 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Sep 5 23:52:49.230673 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Sep 5 23:52:49.230911 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 5 23:52:49.231096 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Sep 5 23:52:49.231122 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 5 23:52:49.231141 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 5 23:52:49.231160 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 5 23:52:49.231179 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 5 23:52:49.231197 kernel: iommu: Default domain type: Translated Sep 5 23:52:49.231216 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 5 23:52:49.231241 kernel: efivars: Registered efivars operations Sep 5 23:52:49.231260 kernel: vgaarb: loaded Sep 5 23:52:49.231279 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 5 23:52:49.231297 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 23:52:49.231316 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 23:52:49.231334 kernel: pnp: PnP ACPI init Sep 5 23:52:49.231552 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Sep 5 23:52:49.231582 kernel: pnp: PnP ACPI: found 1 devices Sep 5 23:52:49.231631 kernel: NET: Registered PF_INET protocol family Sep 5 23:52:49.231671 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 23:52:49.232822 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 23:52:49.232852 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 23:52:49.232871 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 23:52:49.232891 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 23:52:49.232910 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 23:52:49.232929 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:52:49.232947 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:52:49.232975 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 23:52:49.232993 kernel: PCI: CLS 0 bytes, default 64 Sep 5 23:52:49.233012 kernel: kvm [1]: HYP mode not available Sep 5 23:52:49.233030 kernel: Initialise system trusted keyrings Sep 5 23:52:49.233049 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 23:52:49.233068 kernel: Key type asymmetric registered Sep 5 23:52:49.233086 kernel: Asymmetric key parser 'x509' registered Sep 5 23:52:49.233104 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 5 23:52:49.233123 kernel: io scheduler mq-deadline registered Sep 5 23:52:49.233145 kernel: io scheduler kyber registered Sep 5 23:52:49.233164 kernel: io scheduler bfq registered Sep 5 23:52:49.233425 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Sep 5 23:52:49.233453 
kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 5 23:52:49.233472 kernel: ACPI: button: Power Button [PWRB] Sep 5 23:52:49.233491 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Sep 5 23:52:49.233510 kernel: ACPI: button: Sleep Button [SLPB] Sep 5 23:52:49.233528 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 23:52:49.233552 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 5 23:52:49.233783 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Sep 5 23:52:49.233812 kernel: printk: console [ttyS0] disabled Sep 5 23:52:49.233831 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Sep 5 23:52:49.233850 kernel: printk: console [ttyS0] enabled Sep 5 23:52:49.233868 kernel: printk: bootconsole [uart0] disabled Sep 5 23:52:49.233887 kernel: thunder_xcv, ver 1.0 Sep 5 23:52:49.233905 kernel: thunder_bgx, ver 1.0 Sep 5 23:52:49.233923 kernel: nicpf, ver 1.0 Sep 5 23:52:49.233948 kernel: nicvf, ver 1.0 Sep 5 23:52:49.234177 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 5 23:52:49.234508 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:52:48 UTC (1757116368) Sep 5 23:52:49.234537 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 23:52:49.234556 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Sep 5 23:52:49.234575 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 5 23:52:49.234615 kernel: watchdog: Hard watchdog permanently disabled Sep 5 23:52:49.234635 kernel: NET: Registered PF_INET6 protocol family Sep 5 23:52:49.234661 kernel: Segment Routing with IPv6 Sep 5 23:52:49.234680 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 23:52:49.234698 kernel: NET: Registered PF_PACKET protocol family Sep 5 23:52:49.234830 kernel: Key type dns_resolver registered Sep 5 23:52:49.234850 kernel: registered taskstats version 1 Sep 5 23:52:49.234868 kernel: Loading compiled-in X.509 certificates Sep 5 23:52:49.234887 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20' Sep 5 23:52:49.234905 kernel: Key type .fscrypt registered Sep 5 23:52:49.234923 kernel: Key type fscrypt-provisioning registered Sep 5 23:52:49.234948 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 5 23:52:49.234967 kernel: ima: Allocated hash algorithm: sha1 Sep 5 23:52:49.234985 kernel: ima: No architecture policies found Sep 5 23:52:49.235003 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 5 23:52:49.235022 kernel: clk: Disabling unused clocks Sep 5 23:52:49.235040 kernel: Freeing unused kernel memory: 39424K Sep 5 23:52:49.235059 kernel: Run /init as init process Sep 5 23:52:49.235077 kernel: with arguments: Sep 5 23:52:49.235095 kernel: /init Sep 5 23:52:49.235113 kernel: with environment: Sep 5 23:52:49.235135 kernel: HOME=/ Sep 5 23:52:49.235153 kernel: TERM=linux Sep 5 23:52:49.235172 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 23:52:49.235194 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:52:49.235217 systemd[1]: Detected virtualization amazon. Sep 5 23:52:49.235237 systemd[1]: Detected architecture arm64. 
Sep 5 23:52:49.235257 systemd[1]: Running in initrd. Sep 5 23:52:49.235281 systemd[1]: No hostname configured, using default hostname. Sep 5 23:52:49.235301 systemd[1]: Hostname set to . Sep 5 23:52:49.235321 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:52:49.235341 systemd[1]: Queued start job for default target initrd.target. Sep 5 23:52:49.235361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:52:49.235381 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:52:49.235402 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 23:52:49.235423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:52:49.235447 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 23:52:49.235468 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 23:52:49.235491 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 23:52:49.235511 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 23:52:49.235531 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:52:49.235551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:52:49.235571 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:52:49.235596 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:52:49.235616 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:52:49.235636 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:52:49.235656 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:52:49.235676 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:52:49.235696 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 23:52:49.235738 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 23:52:49.235759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:52:49.235779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:52:49.235806 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:52:49.235827 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:52:49.235847 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 23:52:49.235868 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:52:49.235888 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 23:52:49.235908 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 23:52:49.235928 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:52:49.235948 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:52:49.235972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:52:49.235992 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 23:52:49.236013 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:52:49.236032 systemd[1]: Finished systemd-fsck-usr.service. 
Sep 5 23:52:49.236054 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:52:49.236112 systemd-journald[251]: Collecting audit messages is disabled. Sep 5 23:52:49.236154 systemd-journald[251]: Journal started Sep 5 23:52:49.236196 systemd-journald[251]: Runtime Journal (/run/log/journal/ec25098d3c7a5801a89eac7b4b78fdf5) is 8.0M, max 75.3M, 67.3M free. Sep 5 23:52:49.232416 systemd-modules-load[252]: Inserted module 'overlay' Sep 5 23:52:49.244754 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:52:49.258025 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:52:49.261073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:52:49.272149 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:52:49.278754 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 23:52:49.285468 systemd-modules-load[252]: Inserted module 'br_netfilter' Sep 5 23:52:49.288310 kernel: Bridge firewalling registered Sep 5 23:52:49.289223 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:52:49.304210 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:52:49.312841 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:52:49.331693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:52:49.332418 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:52:49.348801 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:52:49.362107 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 23:52:49.373423 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:52:49.389043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:52:49.407073 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:52:49.416969 dracut-cmdline[284]: dracut-dracut-053 Sep 5 23:52:49.424088 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 5 23:52:49.492206 systemd-resolved[290]: Positive Trust Anchors: Sep 5 23:52:49.492243 systemd-resolved[290]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:52:49.492307 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:52:49.575743 kernel: SCSI subsystem initialized Sep 5 23:52:49.582730 kernel: Loading iSCSI transport class v2.0-870. Sep 5 23:52:49.596140 kernel: iscsi: registered transport (tcp) Sep 5 23:52:49.618118 kernel: iscsi: registered transport (qla4xxx) Sep 5 23:52:49.618190 kernel: QLogic iSCSI HBA Driver Sep 5 23:52:49.722728 kernel: random: crng init done Sep 5 23:52:49.722997 systemd-resolved[290]: Defaulting to hostname 'linux'. Sep 5 23:52:49.725200 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:52:49.734772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:52:49.755357 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 23:52:49.767158 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 23:52:49.801848 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 23:52:49.801922 kernel: device-mapper: uevent: version 1.0.3 Sep 5 23:52:49.803838 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 23:52:49.869765 kernel: raid6: neonx8 gen() 6678 MB/s Sep 5 23:52:49.886770 kernel: raid6: neonx4 gen() 6408 MB/s Sep 5 23:52:49.903755 kernel: raid6: neonx2 gen() 5425 MB/s Sep 5 23:52:49.921753 kernel: raid6: neonx1 gen() 3934 MB/s Sep 5 23:52:49.937739 kernel: raid6: int64x8 gen() 3830 MB/s Sep 5 23:52:49.954738 kernel: raid6: int64x4 gen() 3713 MB/s Sep 5 23:52:49.971740 kernel: raid6: int64x2 gen() 3607 MB/s Sep 5 23:52:49.989709 kernel: raid6: int64x1 gen() 2767 MB/s Sep 5 23:52:49.989742 kernel: raid6: using algorithm neonx8 gen() 6678 MB/s Sep 5 23:52:50.007740 kernel: raid6: .... xor() 4869 MB/s, rmw enabled Sep 5 23:52:50.007783 kernel: raid6: using neon recovery algorithm Sep 5 23:52:50.015741 kernel: xor: measuring software checksum speed Sep 5 23:52:50.017945 kernel: 8regs : 10279 MB/sec Sep 5 23:52:50.017983 kernel: 32regs : 11997 MB/sec Sep 5 23:52:50.019245 kernel: arm64_neon : 9245 MB/sec Sep 5 23:52:50.019278 kernel: xor: using function: 32regs (11997 MB/sec) Sep 5 23:52:50.105751 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 23:52:50.126800 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:52:50.142987 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:52:50.176966 systemd-udevd[470]: Using default interface naming scheme 'v255'. Sep 5 23:52:50.186803 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:52:50.199394 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Sep 5 23:52:50.237271 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation Sep 5 23:52:50.295919 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:52:50.307032 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:52:50.431843 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:52:50.445863 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 23:52:50.496145 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 23:52:50.502441 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:52:50.508351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:52:50.511473 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:52:50.525078 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 23:52:50.574548 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:52:50.646999 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 5 23:52:50.647064 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 5 23:52:50.652361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:52:50.654841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:52:50.660780 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:52:50.667341 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 5 23:52:50.667644 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 5 23:52:50.663260 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:52:50.663513 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:52:50.675446 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:52:50.683731 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:e9:de:4f:09:d7 Sep 5 23:52:50.686402 (udev-worker)[514]: Network interface NamePolicy= disabled on kernel command line. Sep 5 23:52:50.694901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:52:50.715283 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 5 23:52:50.715351 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 5 23:52:50.726758 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 5 23:52:50.735010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:52:50.739787 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 23:52:50.739825 kernel: GPT:9289727 != 16777215 Sep 5 23:52:50.745468 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 23:52:50.745502 kernel: GPT:9289727 != 16777215 Sep 5 23:52:50.745528 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 23:52:50.747801 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 5 23:52:50.751996 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:52:50.791411 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 5 23:52:50.838762 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (526) Sep 5 23:52:50.849794 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (522) Sep 5 23:52:50.936411 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 5 23:52:50.963644 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 5 23:52:50.981551 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 5 23:52:51.006764 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 5 23:52:51.013442 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 5 23:52:51.027087 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 23:52:51.041855 disk-uuid[661]: Primary Header is updated. Sep 5 23:52:51.041855 disk-uuid[661]: Secondary Entries is updated. Sep 5 23:52:51.041855 disk-uuid[661]: Secondary Header is updated. Sep 5 23:52:51.054728 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 5 23:52:51.060736 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 5 23:52:51.070774 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 5 23:52:52.072734 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 5 23:52:52.074349 disk-uuid[662]: The operation has completed successfully. Sep 5 23:52:52.270189 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 23:52:52.270409 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 23:52:52.318023 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 23:52:52.339611 sh[1005]: Success Sep 5 23:52:52.367757 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 5 23:52:52.489572 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 23:52:52.501900 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 23:52:52.507199 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 23:52:52.563373 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e Sep 5 23:52:52.563449 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:52:52.565376 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 23:52:52.566823 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 23:52:52.567987 kernel: BTRFS info (device dm-0): using free space tree Sep 5 23:52:52.605741 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 5 23:52:52.619407 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 23:52:52.623924 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 23:52:52.637947 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 23:52:52.647395 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 5 23:52:52.675764 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:52:52.675842 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:52:52.675874 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 5 23:52:52.696826 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 5 23:52:52.716059 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 5 23:52:52.719418 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:52:52.728514 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 23:52:52.740223 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 5 23:52:52.871271 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:52:52.898575 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:52:52.933313 ignition[1122]: Ignition 2.19.0 Sep 5 23:52:52.934651 ignition[1122]: Stage: fetch-offline Sep 5 23:52:52.936580 ignition[1122]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:52:52.936604 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 5 23:52:52.943041 ignition[1122]: Ignition finished successfully Sep 5 23:52:52.946221 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:52:52.965624 systemd-networkd[1205]: lo: Link UP Sep 5 23:52:52.966088 systemd-networkd[1205]: lo: Gained carrier Sep 5 23:52:52.969212 systemd-networkd[1205]: Enumeration completed Sep 5 23:52:52.970176 systemd-networkd[1205]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:52:52.970183 systemd-networkd[1205]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:52:52.970826 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:52:52.975370 systemd[1]: Reached target network.target - Network. Sep 5 23:52:52.980210 systemd-networkd[1205]: eth0: Link UP Sep 5 23:52:52.980218 systemd-networkd[1205]: eth0: Gained carrier Sep 5 23:52:52.980235 systemd-networkd[1205]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:52:53.005091 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 5 23:52:53.019839 systemd-networkd[1205]: eth0: DHCPv4 address 172.31.22.93/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 5 23:52:53.036136 ignition[1208]: Ignition 2.19.0 Sep 5 23:52:53.036646 ignition[1208]: Stage: fetch Sep 5 23:52:53.038564 ignition[1208]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:52:53.038614 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 5 23:52:53.038848 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 5 23:52:53.056988 ignition[1208]: PUT result: OK Sep 5 23:52:53.060658 ignition[1208]: parsed url from cmdline: "" Sep 5 23:52:53.060683 ignition[1208]: no config URL provided Sep 5 23:52:53.060981 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 23:52:53.061014 ignition[1208]: no config at "/usr/lib/ignition/user.ign" Sep 5 23:52:53.061060 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 5 23:52:53.069854 ignition[1208]: PUT result: OK Sep 5 23:52:53.070007 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 5 23:52:53.071793 ignition[1208]: GET result: OK Sep 5 23:52:53.071961 ignition[1208]: parsing config with SHA512: 1ed8ab089ddbfa6c5a41c80cbf9cffe54239adb64484ab3d6fc7ad30aca8ba652f765f7607d89480146e8cb0b8f222deb4701ec6be0d033f312d6fece7c61b95 Sep 5 23:52:53.082576 unknown[1208]: fetched base config from "system" Sep 5 23:52:53.082606 unknown[1208]: fetched base config from "system" Sep 5 23:52:53.082623 unknown[1208]: fetched user config from "aws" Sep 5 23:52:53.088345 ignition[1208]: fetch: fetch complete Sep 5 23:52:53.088372 ignition[1208]: fetch: fetch passed Sep 5 23:52:53.088925 ignition[1208]: Ignition finished successfully Sep 5 23:52:53.096162 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 5 23:52:53.109152 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 23:52:53.135814 ignition[1216]: Ignition 2.19.0 Sep 5 23:52:53.135842 ignition[1216]: Stage: kargs Sep 5 23:52:53.137718 ignition[1216]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:52:53.137748 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 5 23:52:53.138995 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 5 23:52:53.143849 ignition[1216]: PUT result: OK Sep 5 23:52:53.150479 ignition[1216]: kargs: kargs passed Sep 5 23:52:53.150613 ignition[1216]: Ignition finished successfully Sep 5 23:52:53.156823 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 23:52:53.170001 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 5 23:52:53.193442 ignition[1222]: Ignition 2.19.0 Sep 5 23:52:53.193469 ignition[1222]: Stage: disks Sep 5 23:52:53.195244 ignition[1222]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:52:53.195271 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 5 23:52:53.195438 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 5 23:52:53.197508 ignition[1222]: PUT result: OK Sep 5 23:52:53.209913 ignition[1222]: disks: disks passed Sep 5 23:52:53.210008 ignition[1222]: Ignition finished successfully Sep 5 23:52:53.214618 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 23:52:53.217759 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 23:52:53.220352 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Sep 5 23:52:53.223116 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:52:53.227438 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:52:53.232068 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:52:53.245969 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 23:52:53.296324 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 5 23:52:53.304002 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 23:52:53.313942 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 23:52:53.412999 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none. Sep 5 23:52:53.414108 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 23:52:53.416942 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 23:52:53.434905 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 23:52:53.442101 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 23:52:53.447602 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 5 23:52:53.454997 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 23:52:53.461202 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:52:53.481801 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 23:52:53.493157 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 23:52:53.504754 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1249) Sep 5 23:52:53.508639 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:52:53.508735 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:52:53.511983 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 5 23:52:53.527763 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 5 23:52:53.530237 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 23:52:53.602225 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 23:52:53.611634 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory Sep 5 23:52:53.620252 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 23:52:53.629208 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 23:52:53.785858 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 23:52:53.796065 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 23:52:53.805144 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 23:52:53.823465 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 23:52:53.827975 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:52:53.859921 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 5 23:52:53.877301 ignition[1364]: INFO : Ignition 2.19.0 Sep 5 23:52:53.877301 ignition[1364]: INFO : Stage: mount Sep 5 23:52:53.877301 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:52:53.877301 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 5 23:52:53.877301 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 5 23:52:53.888905 ignition[1364]: INFO : PUT result: OK Sep 5 23:52:53.893747 ignition[1364]: INFO : mount: mount passed Sep 5 23:52:53.895498 ignition[1364]: INFO : Ignition finished successfully Sep 5 23:52:53.899539 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 23:52:53.910009 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 23:52:53.924197 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 23:52:53.955585 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1376) Sep 5 23:52:53.955646 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:52:53.955684 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:52:53.958497 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 5 23:52:53.963750 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 5 23:52:53.967738 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 23:52:54.001918 ignition[1393]: INFO : Ignition 2.19.0 Sep 5 23:52:54.001918 ignition[1393]: INFO : Stage: files Sep 5 23:52:54.006750 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:52:54.006750 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 5 23:52:54.006750 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 5 23:52:54.014194 ignition[1393]: INFO : PUT result: OK Sep 5 23:52:54.019150 ignition[1393]: DEBUG : files: compiled without relabeling support, skipping Sep 5 23:52:54.023509 ignition[1393]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 23:52:54.023509 ignition[1393]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 23:52:54.034875 ignition[1393]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 23:52:54.038007 ignition[1393]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 23:52:54.041221 unknown[1393]: wrote ssh authorized keys file for user: core Sep 5 23:52:54.043960 ignition[1393]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 23:52:54.046791 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 5 23:52:54.046791 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 5 23:52:54.046791 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 5 23:52:54.046791 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 5 23:52:54.160414 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 5 23:52:54.790960 systemd-networkd[1205]: eth0: Gained IPv6LL Sep 5 23:52:54.936020 ignition[1393]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 5 23:52:54.940287 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 5 23:52:54.944490 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 23:52:54.948403 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 23:52:54.952372 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 23:52:54.956188 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:52:54.956188 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:52:54.956188 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:52:54.956188 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:52:54.956188 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:52:54.956188 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:52:54.956188 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:52:54.956188 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:52:54.956188 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:52:54.956188 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 5 23:52:55.284726 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 5 23:52:55.678075 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:52:55.678075 ignition[1393]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: op(e): op(f): [started] writing 
unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:52:55.686146 ignition[1393]: INFO : files: files passed Sep 5 23:52:55.686146 ignition[1393]: INFO : Ignition finished successfully Sep 5 23:52:55.731443 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 23:52:55.739985 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 23:52:55.744376 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 23:52:55.778843 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 23:52:55.781540 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 23:52:55.798605 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:52:55.798605 initrd-setup-root-after-ignition[1422]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:52:55.808191 initrd-setup-root-after-ignition[1426]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:52:55.804766 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:52:55.811667 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 23:52:55.829205 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 23:52:55.900995 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 23:52:55.903416 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 23:52:55.909266 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 23:52:55.913745 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 23:52:55.918320 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 23:52:55.929056 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 23:52:55.968797 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:52:55.987811 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 23:52:56.013390 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:52:56.017637 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:52:56.024842 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 23:52:56.025201 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Sep 5 23:52:56.025456 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:52:56.034358 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 23:52:56.049333 systemd[1]: Stopped target basic.target - Basic System. Sep 5 23:52:56.051811 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 23:52:56.054537 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:52:56.059636 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 23:52:56.062657 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 23:52:56.063495 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:52:56.064751 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 23:52:56.065849 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 23:52:56.071462 systemd[1]: Stopped target swap.target - Swaps. Sep 5 23:52:56.072209 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 23:52:56.072466 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:52:56.073692 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:52:56.074193 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:52:56.074426 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 23:52:56.080304 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:52:56.080574 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 23:52:56.080892 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 23:52:56.082072 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 23:52:56.082320 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:52:56.087857 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 23:52:56.088122 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 23:52:56.115644 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 23:52:56.126841 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 23:52:56.127442 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:52:56.158384 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 23:52:56.164877 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 23:52:56.169123 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:52:56.175289 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 23:52:56.177671 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:52:56.194314 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 23:52:56.196817 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 23:52:56.221219 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 5 23:52:56.225593 ignition[1446]: INFO : Ignition 2.19.0 Sep 5 23:52:56.225593 ignition[1446]: INFO : Stage: umount Sep 5 23:52:56.230177 ignition[1446]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:52:56.230177 ignition[1446]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 5 23:52:56.230177 ignition[1446]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 5 23:52:56.241462 ignition[1446]: INFO : PUT result: OK Sep 5 23:52:56.247597 ignition[1446]: INFO : umount: umount passed Sep 5 23:52:56.247597 ignition[1446]: INFO : Ignition finished successfully Sep 5 23:52:56.250143 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 23:52:56.252604 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 23:52:56.255953 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 23:52:56.257774 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 23:52:56.262406 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 23:52:56.262616 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 23:52:56.265895 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 23:52:56.266001 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 23:52:56.269792 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 5 23:52:56.269903 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 5 23:52:56.272675 systemd[1]: Stopped target network.target - Network. Sep 5 23:52:56.276154 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 23:52:56.276274 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:52:56.281065 systemd[1]: Stopped target paths.target - Path Units. Sep 5 23:52:56.284990 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 23:52:56.285122 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:52:56.285684 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 23:52:56.286423 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 23:52:56.289463 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 23:52:56.289551 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:52:56.289811 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 23:52:56.289882 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:52:56.290121 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 23:52:56.290210 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 23:52:56.290501 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 23:52:56.294545 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 23:52:56.317033 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 23:52:56.317144 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 23:52:56.320049 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 23:52:56.323816 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 23:52:56.334814 systemd-networkd[1205]: eth0: DHCPv6 lease lost Sep 5 23:52:56.364959 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 23:52:56.365273 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 23:52:56.374037 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
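The PUT to http://169.254.169.254/latest/api/token followed by "PUT result: OK" in the umount stage above is the IMDSv2 session-token handshake; Ignition (and later coreos-metadata) obtains a short-lived token and presents it on every metadata GET. A minimal sketch of the same two-step pattern, illustrative only and not the agents' actual code:

import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    # PUT /latest/api/token with a TTL header, the same request logged above.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # Subsequent metadata reads carry the token in a header.
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    tok = imds_token()
    print(imds_get("/2021-01-03/meta-data/instance-id", tok))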
Sep 5 23:52:56.374310 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:52:56.385015 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 23:52:56.391920 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 23:52:56.392230 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:52:56.397557 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:52:56.407559 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 23:52:56.407884 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 23:52:56.428456 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:52:56.430911 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:52:56.437005 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 23:52:56.437126 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 23:52:56.439675 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 23:52:56.439820 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:52:56.458255 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 23:52:56.462849 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:52:56.468201 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 23:52:56.468376 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 23:52:56.476694 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 23:52:56.476858 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:52:56.479526 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 23:52:56.479648 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:52:56.490327 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 23:52:56.490449 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 23:52:56.498402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:52:56.498545 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:52:56.513001 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 23:52:56.515691 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 23:52:56.515840 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:52:56.527928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:52:56.531004 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:52:56.542317 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 23:52:56.543397 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 23:52:56.560471 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 23:52:56.561099 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 23:52:56.568589 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 23:52:56.577004 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 23:52:56.610662 systemd[1]: Switching root. 
Sep 5 23:52:56.651503 systemd-journald[251]: Journal stopped Sep 5 23:52:58.677533 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Sep 5 23:52:58.677689 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 23:52:58.680079 kernel: SELinux: policy capability open_perms=1 Sep 5 23:52:58.680125 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 23:52:58.680157 kernel: SELinux: policy capability always_check_network=0 Sep 5 23:52:58.680195 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 23:52:58.680226 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 23:52:58.680257 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 23:52:58.680287 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 23:52:58.680316 kernel: audit: type=1403 audit(1757116377.021:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 23:52:58.680356 systemd[1]: Successfully loaded SELinux policy in 51.880ms. Sep 5 23:52:58.680409 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.189ms. Sep 5 23:52:58.680444 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:52:58.680477 systemd[1]: Detected virtualization amazon. Sep 5 23:52:58.680511 systemd[1]: Detected architecture arm64. Sep 5 23:52:58.680544 systemd[1]: Detected first boot. Sep 5 23:52:58.680575 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:52:58.680607 zram_generator::config[1510]: No configuration found. Sep 5 23:52:58.680650 systemd[1]: Populated /etc with preset unit settings. Sep 5 23:52:58.680683 systemd[1]: Queued start job for default target multi-user.target. Sep 5 23:52:58.680743 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 5 23:52:58.680785 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 23:52:58.680823 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 23:52:58.680857 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 23:52:58.680890 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 23:52:58.680923 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 23:52:58.680957 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 23:52:58.680990 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 23:52:58.681019 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 23:52:58.681050 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:52:58.681082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:52:58.681116 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 23:52:58.681150 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 23:52:58.681182 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
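The lines above record the hand-off to the real root: journald in the initramfs stops, PID 1 loads the SELinux policy, detects a first boot, and derives the machine ID from the VM UUID. A small illustrative check of that state on the running system (standard paths, nothing specific to this log):

from pathlib import Path

# /etc/machine-id is the ID systemd initialized from the VM UUID above;
# /sys/fs/selinux/enforce reports "0" (permissive) or "1" (enforcing).
print("machine-id:", Path("/etc/machine-id").read_text().strip())
print("selinux enforcing:", Path("/sys/fs/selinux/enforce").read_text().strip())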
Sep 5 23:52:58.681216 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:52:58.681247 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 5 23:52:58.681279 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:52:58.681310 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 23:52:58.681340 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:52:58.681371 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:52:58.681407 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:52:58.681438 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:52:58.681468 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 23:52:58.681508 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 23:52:58.681541 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 23:52:58.681574 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 23:52:58.681604 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:52:58.681633 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:52:58.681673 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:52:58.681727 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 23:52:58.681764 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 23:52:58.683571 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 23:52:58.683619 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 23:52:58.683649 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 23:52:58.683682 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 23:52:58.683994 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 23:52:58.684038 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 23:52:58.684078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:52:58.684108 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:52:58.684138 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 23:52:58.684169 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:52:58.684199 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:52:58.684228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:52:58.684258 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 23:52:58.684289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:52:58.684324 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 23:52:58.684354 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 5 23:52:58.684386 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Sep 5 23:52:58.684415 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:52:58.684446 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:52:58.684475 kernel: loop: module loaded Sep 5 23:52:58.684506 kernel: fuse: init (API version 7.39) Sep 5 23:52:58.684550 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 23:52:58.685128 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 23:52:58.685176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:52:58.685207 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 23:52:58.685239 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 23:52:58.685269 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 23:52:58.685353 systemd-journald[1610]: Collecting audit messages is disabled. Sep 5 23:52:58.685413 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 23:52:58.685444 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 23:52:58.685476 kernel: ACPI: bus type drm_connector registered Sep 5 23:52:58.685506 systemd-journald[1610]: Journal started Sep 5 23:52:58.685552 systemd-journald[1610]: Runtime Journal (/run/log/journal/ec25098d3c7a5801a89eac7b4b78fdf5) is 8.0M, max 75.3M, 67.3M free. Sep 5 23:52:58.694912 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:52:58.696778 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 23:52:58.701956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:52:58.706453 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 23:52:58.707389 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 23:52:58.711657 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:52:58.714106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:52:58.717452 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:52:58.718909 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:52:58.721916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:52:58.722260 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:52:58.726635 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 23:52:58.727431 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 23:52:58.730455 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:52:58.734054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:52:58.738606 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:52:58.744672 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 23:52:58.750925 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 23:52:58.759393 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 23:52:58.783525 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 23:52:58.794048 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 23:52:58.799077 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
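journald reports the runtime journal in /run capped at 75.3M with 67.3M free; it is flushed to persistent storage under /var/log/journal a little later in this log. Current usage can be queried with the standard tool, for example via a quick subprocess call (illustrative):

import subprocess

# journalctl --disk-usage prints the combined size of archived and active journals.
out = subprocess.run(["journalctl", "--disk-usage"],
                     capture_output=True, text=True, check=True)
print(out.stdout.strip())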
Sep 5 23:52:58.807969 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 23:52:58.824008 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 23:52:58.846189 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 23:52:58.849958 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:52:58.865083 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 23:52:58.869496 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:52:58.878182 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:52:58.910080 systemd-journald[1610]: Time spent on flushing to /var/log/journal/ec25098d3c7a5801a89eac7b4b78fdf5 is 98.186ms for 891 entries. Sep 5 23:52:58.910080 systemd-journald[1610]: System Journal (/var/log/journal/ec25098d3c7a5801a89eac7b4b78fdf5) is 8.0M, max 195.6M, 187.6M free. Sep 5 23:52:59.038937 systemd-journald[1610]: Received client request to flush runtime journal. Sep 5 23:52:58.906985 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:52:58.917070 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 23:52:58.923171 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 23:52:58.970341 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 23:52:58.980052 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 23:52:59.046541 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:52:59.053664 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 23:52:59.069951 systemd-tmpfiles[1658]: ACLs are not supported, ignoring. Sep 5 23:52:59.069992 systemd-tmpfiles[1658]: ACLs are not supported, ignoring. Sep 5 23:52:59.093508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:52:59.108789 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 23:52:59.112027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:52:59.132004 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 23:52:59.184975 udevadm[1676]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 23:52:59.224883 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 23:52:59.236054 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:52:59.280789 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Sep 5 23:52:59.281303 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Sep 5 23:52:59.292114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:52:59.946210 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 23:52:59.962166 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 5 23:53:00.017483 systemd-udevd[1686]: Using default interface naming scheme 'v255'. Sep 5 23:53:00.063258 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:53:00.075967 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:53:00.127190 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 23:53:00.269917 (udev-worker)[1701]: Network interface NamePolicy= disabled on kernel command line. Sep 5 23:53:00.271208 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 5 23:53:00.307546 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 23:53:00.458752 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1689) Sep 5 23:53:00.476269 systemd-networkd[1690]: lo: Link UP Sep 5 23:53:00.476799 systemd-networkd[1690]: lo: Gained carrier Sep 5 23:53:00.479673 systemd-networkd[1690]: Enumeration completed Sep 5 23:53:00.480185 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:53:00.483429 systemd-networkd[1690]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:53:00.483446 systemd-networkd[1690]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:53:00.497621 systemd-networkd[1690]: eth0: Link UP Sep 5 23:53:00.499885 systemd-networkd[1690]: eth0: Gained carrier Sep 5 23:53:00.500068 systemd-networkd[1690]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:53:00.512358 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 23:53:00.524946 systemd-networkd[1690]: eth0: DHCPv4 address 172.31.22.93/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 5 23:53:00.677251 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:53:00.771779 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 23:53:00.789044 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 5 23:53:00.803084 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 23:53:00.825863 lvm[1811]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:53:00.837605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:53:00.871427 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 23:53:00.874919 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:53:00.891211 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 23:53:00.901991 lvm[1818]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:53:00.941318 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 23:53:00.947648 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 23:53:00.950606 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 23:53:00.950665 systemd[1]: Reached target local-fs.target - Local File Systems. 
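systemd-networkd matches eth0 against the stock /usr/lib/systemd/network/zz-default.network and acquires 172.31.22.93/20 from 172.31.16.1 over DHCPv4. Below is a hypothetical unit with the same effect, written in Python the way Ignition writes files earlier in the log; the unit body is an assumption for illustration, not the contents of the shipped zz-default.network:

from pathlib import Path

# Hypothetical: a local .network unit that would DHCP eth0 just as the
# stock zz-default.network does in the log above.
unit = Path("/etc/systemd/network/50-eth0-dhcp.network")
unit.parent.mkdir(parents=True, exist_ok=True)
unit.write_text(
    "[Match]\n"
    "Name=eth0\n"
    "\n"
    "[Network]\n"
    "DHCP=yes\n"
)
# systemd-networkd would pick this up after `networkctl reload`.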
Sep 5 23:53:00.953034 systemd[1]: Reached target machines.target - Containers. Sep 5 23:53:00.957101 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 23:53:00.968077 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 23:53:00.972959 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 23:53:00.975582 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:53:00.977809 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 23:53:00.983608 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 23:53:01.004310 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 23:53:01.010688 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 23:53:01.045368 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 23:53:01.049111 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 23:53:01.058050 kernel: loop0: detected capacity change from 0 to 114432 Sep 5 23:53:01.063781 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 23:53:01.102999 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 23:53:01.129761 kernel: loop1: detected capacity change from 0 to 114328 Sep 5 23:53:01.189767 kernel: loop2: detected capacity change from 0 to 52536 Sep 5 23:53:01.295376 kernel: loop3: detected capacity change from 0 to 203944 Sep 5 23:53:01.345867 kernel: loop4: detected capacity change from 0 to 114432 Sep 5 23:53:01.371950 kernel: loop5: detected capacity change from 0 to 114328 Sep 5 23:53:01.390752 kernel: loop6: detected capacity change from 0 to 52536 Sep 5 23:53:01.416739 kernel: loop7: detected capacity change from 0 to 203944 Sep 5 23:53:01.454130 (sd-merge)[1839]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 5 23:53:01.455278 (sd-merge)[1839]: Merged extensions into '/usr'. Sep 5 23:53:01.464790 systemd[1]: Reloading requested from client PID 1826 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 23:53:01.464826 systemd[1]: Reloading... Sep 5 23:53:01.627752 zram_generator::config[1870]: No configuration found. Sep 5 23:53:01.759481 ldconfig[1822]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 23:53:01.925298 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:53:02.075851 systemd[1]: Reloading finished in 610 ms. Sep 5 23:53:02.099799 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 23:53:02.103454 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 23:53:02.121000 systemd[1]: Starting ensure-sysext.service... Sep 5 23:53:02.132083 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:53:02.147100 systemd[1]: Reloading requested from client PID 1926 ('systemctl') (unit ensure-sysext.service)... Sep 5 23:53:02.147138 systemd[1]: Reloading... 
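The sd-merge step above overlays the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-ami' system extensions into /usr; the kubernetes image is the one Ignition downloaded and linked earlier. A sketch of that wiring, with the two paths taken from the log and the code itself purely illustrative:

from pathlib import Path

# The raw image Ignition fetched, and the /etc/extensions link that makes
# systemd-sysext consider it for merging (both paths appear earlier in the log).
image = Path("/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw")
link = Path("/etc/extensions/kubernetes.raw")
link.parent.mkdir(parents=True, exist_ok=True)
if not link.is_symlink():
    link.symlink_to(image)
# `systemd-sysext merge` then overlays the image's /usr tree onto the running
# system, which is the "Merged extensions into '/usr'" step above.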
Sep 5 23:53:02.154837 systemd-networkd[1690]: eth0: Gained IPv6LL Sep 5 23:53:02.188627 systemd-tmpfiles[1927]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 23:53:02.190494 systemd-tmpfiles[1927]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 23:53:02.192400 systemd-tmpfiles[1927]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 23:53:02.193600 systemd-tmpfiles[1927]: ACLs are not supported, ignoring. Sep 5 23:53:02.194491 systemd-tmpfiles[1927]: ACLs are not supported, ignoring. Sep 5 23:53:02.203158 systemd-tmpfiles[1927]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:53:02.203184 systemd-tmpfiles[1927]: Skipping /boot Sep 5 23:53:02.225639 systemd-tmpfiles[1927]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:53:02.225676 systemd-tmpfiles[1927]: Skipping /boot Sep 5 23:53:02.329803 zram_generator::config[1958]: No configuration found. Sep 5 23:53:02.599473 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:53:02.759394 systemd[1]: Reloading finished in 611 ms. Sep 5 23:53:02.789334 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 23:53:02.800988 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:53:02.820267 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:53:02.833024 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 23:53:02.854023 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 23:53:02.876050 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:53:02.884897 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 23:53:02.912270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:53:02.920197 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:53:02.931930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:53:02.955390 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:53:02.960523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:53:02.970683 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 23:53:02.984043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:53:02.984415 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:53:03.004206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:53:03.004651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:53:03.013697 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:53:03.016312 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:53:03.035123 augenrules[2046]: No rules Sep 5 23:53:03.042319 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 5 23:53:03.060677 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 23:53:03.070697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:53:03.077394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:53:03.091338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:53:03.102523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:53:03.108022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:53:03.127045 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 23:53:03.144617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:53:03.146091 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:53:03.157403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:53:03.157808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:53:03.176962 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:53:03.179125 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:53:03.191434 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:53:03.203293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:53:03.224467 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:53:03.238244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:53:03.243316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:53:03.243422 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 23:53:03.262098 systemd[1]: Finished ensure-sysext.service. Sep 5 23:53:03.272299 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 23:53:03.285969 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 23:53:03.292889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:53:03.293285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:53:03.305303 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:53:03.305743 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:53:03.323769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:53:03.324002 systemd-resolved[2028]: Positive Trust Anchors: Sep 5 23:53:03.324045 systemd-resolved[2028]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:53:03.324112 systemd-resolved[2028]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:53:03.329027 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:53:03.338260 systemd-resolved[2028]: Defaulting to hostname 'linux'. Sep 5 23:53:03.343041 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:53:03.348480 systemd[1]: Reached target network.target - Network. Sep 5 23:53:03.350798 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 23:53:03.353371 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:53:03.356257 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:53:03.356412 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:53:03.356502 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:53:03.356559 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:53:03.359383 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 23:53:03.362343 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 23:53:03.365601 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 23:53:03.368349 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 23:53:03.371399 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 23:53:03.374362 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 23:53:03.374420 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:53:03.376631 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:53:03.380558 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 23:53:03.386274 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 23:53:03.390688 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 23:53:03.405852 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 23:53:03.408645 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:53:03.410919 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:53:03.413361 systemd[1]: System is tainted: cgroupsv1 Sep 5 23:53:03.413459 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
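systemd-resolved lists the root-zone DS record (key tag 20326, the KSK-2017 anchor) as its positive trust anchor and the usual private-use and reverse zones as negative anchors, then defaults to the hostname 'linux'. The live DNS configuration it ends up with can be inspected with the standard resolvectl tool (illustrative):

import subprocess

# Prints per-link DNS servers, search domains and DNSSEC mode as seen by resolved.
print(subprocess.run(["resolvectl", "status"], capture_output=True, text=True).stdout)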
Sep 5 23:53:03.413510 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:53:03.418924 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 23:53:03.436108 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 23:53:03.447025 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 23:53:03.456635 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 23:53:03.484033 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 23:53:03.489337 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 23:53:03.498515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:03.517934 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 23:53:03.542047 systemd[1]: Started ntpd.service - Network Time Service. Sep 5 23:53:03.555328 jq[2093]: false Sep 5 23:53:03.556946 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 23:53:03.563579 extend-filesystems[2094]: Found loop4 Sep 5 23:53:03.577994 extend-filesystems[2094]: Found loop5 Sep 5 23:53:03.577994 extend-filesystems[2094]: Found loop6 Sep 5 23:53:03.577994 extend-filesystems[2094]: Found loop7 Sep 5 23:53:03.577994 extend-filesystems[2094]: Found nvme0n1 Sep 5 23:53:03.577994 extend-filesystems[2094]: Found nvme0n1p1 Sep 5 23:53:03.577994 extend-filesystems[2094]: Found nvme0n1p2 Sep 5 23:53:03.577994 extend-filesystems[2094]: Found nvme0n1p3 Sep 5 23:53:03.577994 extend-filesystems[2094]: Found usr Sep 5 23:53:03.577994 extend-filesystems[2094]: Found nvme0n1p4 Sep 5 23:53:03.577994 extend-filesystems[2094]: Found nvme0n1p6 Sep 5 23:53:03.577994 extend-filesystems[2094]: Found nvme0n1p7 Sep 5 23:53:03.577994 extend-filesystems[2094]: Found nvme0n1p9 Sep 5 23:53:03.577994 extend-filesystems[2094]: Checking size of /dev/nvme0n1p9 Sep 5 23:53:03.617890 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 23:53:03.625921 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 5 23:53:03.638142 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 23:53:03.666405 extend-filesystems[2094]: Resized partition /dev/nvme0n1p9 Sep 5 23:53:03.685751 extend-filesystems[2113]: resize2fs 1.47.1 (20-May-2024) Sep 5 23:53:03.683016 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 23:53:03.707020 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 23:53:03.708009 dbus-daemon[2092]: [system] SELinux support is enabled Sep 5 23:53:03.720152 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 23:53:03.729170 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 5 23:53:03.753763 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 5 23:53:03.747151 dbus-daemon[2092]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1690 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 5 23:53:03.756347 ntpd[2101]: ntpd 4.2.8p17@1.4004-o Fri Sep 5 21:57:21 UTC 2025 (1): Starting Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: ntpd 4.2.8p17@1.4004-o Fri Sep 5 21:57:21 UTC 2025 (1): Starting Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: ---------------------------------------------------- Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: ntp-4 is maintained by Network Time Foundation, Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: corporation. Support and training for ntp-4 are Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: available at https://www.nwtime.org/support Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: ---------------------------------------------------- Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: proto: precision = 0.108 usec (-23) Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: basedate set to 2025-08-24 Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: gps base set to 2025-08-24 (week 2381) Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: Listen and drop on 0 v6wildcard [::]:123 Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: Listen normally on 2 lo 127.0.0.1:123 Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: Listen normally on 3 eth0 172.31.22.93:123 Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: Listen normally on 4 lo [::1]:123 Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: Listen normally on 5 eth0 [fe80::4e9:deff:fe4f:9d7%2]:123 Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: Listening on routing socket on fd #22 for interface updates Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:53:03.784756 ntpd[2101]: 5 Sep 23:53:03 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:53:03.759879 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 23:53:03.756414 ntpd[2101]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 5 23:53:03.790620 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 23:53:03.756434 ntpd[2101]: ---------------------------------------------------- Sep 5 23:53:03.819933 jq[2123]: true Sep 5 23:53:03.816376 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 23:53:03.756453 ntpd[2101]: ntp-4 is maintained by Network Time Foundation, Sep 5 23:53:03.818163 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 23:53:03.756472 ntpd[2101]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 5 23:53:03.756490 ntpd[2101]: corporation. 
Support and training for ntp-4 are Sep 5 23:53:03.756509 ntpd[2101]: available at https://www.nwtime.org/support Sep 5 23:53:03.756528 ntpd[2101]: ---------------------------------------------------- Sep 5 23:53:03.769210 ntpd[2101]: proto: precision = 0.108 usec (-23) Sep 5 23:53:03.769644 ntpd[2101]: basedate set to 2025-08-24 Sep 5 23:53:03.769669 ntpd[2101]: gps base set to 2025-08-24 (week 2381) Sep 5 23:53:03.773677 ntpd[2101]: Listen and drop on 0 v6wildcard [::]:123 Sep 5 23:53:03.773774 ntpd[2101]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 5 23:53:03.774052 ntpd[2101]: Listen normally on 2 lo 127.0.0.1:123 Sep 5 23:53:03.774128 ntpd[2101]: Listen normally on 3 eth0 172.31.22.93:123 Sep 5 23:53:03.774208 ntpd[2101]: Listen normally on 4 lo [::1]:123 Sep 5 23:53:03.774289 ntpd[2101]: Listen normally on 5 eth0 [fe80::4e9:deff:fe4f:9d7%2]:123 Sep 5 23:53:03.774349 ntpd[2101]: Listening on routing socket on fd #22 for interface updates Sep 5 23:53:03.781262 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:53:03.781319 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:53:03.873765 coreos-metadata[2090]: Sep 05 23:53:03.870 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 5 23:53:03.876275 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 23:53:03.889970 coreos-metadata[2090]: Sep 05 23:53:03.879 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 5 23:53:03.889970 coreos-metadata[2090]: Sep 05 23:53:03.886 INFO Fetch successful Sep 5 23:53:03.889970 coreos-metadata[2090]: Sep 05 23:53:03.886 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 5 23:53:03.889970 coreos-metadata[2090]: Sep 05 23:53:03.887 INFO Fetch successful Sep 5 23:53:03.889970 coreos-metadata[2090]: Sep 05 23:53:03.888 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 5 23:53:03.890331 coreos-metadata[2090]: Sep 05 23:53:03.890 INFO Fetch successful Sep 5 23:53:03.890331 coreos-metadata[2090]: Sep 05 23:53:03.890 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 5 23:53:03.892654 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 23:53:03.911996 coreos-metadata[2090]: Sep 05 23:53:03.904 INFO Fetch successful Sep 5 23:53:03.911996 coreos-metadata[2090]: Sep 05 23:53:03.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 5 23:53:03.911996 coreos-metadata[2090]: Sep 05 23:53:03.911 INFO Fetch failed with 404: resource not found Sep 5 23:53:03.911996 coreos-metadata[2090]: Sep 05 23:53:03.911 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 5 23:53:03.899613 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 23:53:03.900225 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 5 23:53:03.937824 coreos-metadata[2090]: Sep 05 23:53:03.914 INFO Fetch successful Sep 5 23:53:03.937824 coreos-metadata[2090]: Sep 05 23:53:03.914 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 5 23:53:03.937824 coreos-metadata[2090]: Sep 05 23:53:03.917 INFO Fetch successful Sep 5 23:53:03.937824 coreos-metadata[2090]: Sep 05 23:53:03.918 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 5 23:53:03.937824 coreos-metadata[2090]: Sep 05 23:53:03.921 INFO Fetch successful Sep 5 23:53:03.937824 coreos-metadata[2090]: Sep 05 23:53:03.922 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 5 23:53:03.937824 coreos-metadata[2090]: Sep 05 23:53:03.926 INFO Fetch successful Sep 5 23:53:03.937824 coreos-metadata[2090]: Sep 05 23:53:03.926 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 5 23:53:03.937824 coreos-metadata[2090]: Sep 05 23:53:03.931 INFO Fetch successful Sep 5 23:53:03.967656 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 23:53:04.015031 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 5 23:53:03.997304 dbus-daemon[2092]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 23:53:04.042006 jq[2144]: true Sep 5 23:53:04.042388 update_engine[2120]: I20250905 23:53:04.015475 2120 main.cc:92] Flatcar Update Engine starting Sep 5 23:53:04.029990 (ntainerd)[2163]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 23:53:04.058126 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 23:53:04.064630 extend-filesystems[2113]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 5 23:53:04.064630 extend-filesystems[2113]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 23:53:04.064630 extend-filesystems[2113]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 5 23:53:04.102106 update_engine[2120]: I20250905 23:53:04.057013 2120 update_check_scheduler.cc:74] Next update check in 10m18s Sep 5 23:53:04.063444 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 23:53:04.110597 tar[2137]: linux-arm64/helm Sep 5 23:53:04.111106 extend-filesystems[2094]: Resized filesystem in /dev/nvme0n1p9 Sep 5 23:53:04.063499 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 23:53:04.091035 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 5 23:53:04.105751 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 23:53:04.105793 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 23:53:04.109551 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 23:53:04.110082 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 23:53:04.135690 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 23:53:04.172960 systemd[1]: Started update-engine.service - Update Engine. 
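extend-filesystems grows the root filesystem online; the figures above are in 4 KiB blocks (the "(4k)" in the resize2fs output), so the resize takes /dev/nvme0n1p9, mounted on /, from roughly 2.1 GiB to roughly 5.7 GiB. A quick conversion of the logged block counts:

# Converts the block counts reported above for /dev/nvme0n1p9.
BLOCK = 4096  # ext4 block size, per the "(4k)" annotation in the log
old_blocks, new_blocks = 553_472, 1_489_915
print(f"before resize: {old_blocks * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after  resize: {new_blocks * BLOCK / 2**30:.2f} GiB")  # ~5.68 GiB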
Sep 5 23:53:04.201425 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 23:53:04.204167 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 23:53:04.247169 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 23:53:04.302237 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 5 23:53:04.319229 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 5 23:53:04.473230 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2187) Sep 5 23:53:04.469918 systemd-logind[2117]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 23:53:04.469965 systemd-logind[2117]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 5 23:53:04.488760 bash[2233]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:53:04.499377 systemd-logind[2117]: New seat seat0. Sep 5 23:53:04.510775 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 23:53:04.525634 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 23:53:04.621825 systemd[1]: Starting sshkeys.service... Sep 5 23:53:04.781760 amazon-ssm-agent[2206]: Initializing new seelog logger Sep 5 23:53:04.781760 amazon-ssm-agent[2206]: New Seelog Logger Creation Complete Sep 5 23:53:04.781760 amazon-ssm-agent[2206]: 2025/09/05 23:53:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:04.781760 amazon-ssm-agent[2206]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:04.793432 amazon-ssm-agent[2206]: 2025/09/05 23:53:04 processing appconfig overrides Sep 5 23:53:04.793432 amazon-ssm-agent[2206]: 2025/09/05 23:53:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:04.793432 amazon-ssm-agent[2206]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:04.793432 amazon-ssm-agent[2206]: 2025-09-05 23:53:04 INFO Proxy environment variables: Sep 5 23:53:04.793432 amazon-ssm-agent[2206]: 2025/09/05 23:53:04 processing appconfig overrides Sep 5 23:53:04.793432 amazon-ssm-agent[2206]: 2025/09/05 23:53:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:04.793432 amazon-ssm-agent[2206]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:04.793432 amazon-ssm-agent[2206]: 2025/09/05 23:53:04 processing appconfig overrides Sep 5 23:53:04.792922 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 5 23:53:04.805972 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 5 23:53:04.830935 amazon-ssm-agent[2206]: 2025/09/05 23:53:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:04.837033 amazon-ssm-agent[2206]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
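coreos-metadata-sshkeys@core, starting above, performs the fetches logged a little further down (public-keys, then public-keys/0/openssh-key) and writes the result to /home/core/.ssh/authorized_keys. A rough sketch under the same IMDSv2 assumptions as earlier; the helper name and file modes are illustrative, not taken from the agent:

    import os
    import urllib.request

    def imds_get(path, token):
        # token obtained as in the IMDSv2 sketch above
        req = urllib.request.Request(
            "http://169.254.169.254" + path,
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(req, timeout=2).read().decode()

    def write_authorized_keys(token, home="/home/core"):
        key = imds_get("/2021-01-03/meta-data/public-keys/0/openssh-key", token)
        ssh_dir = os.path.join(home, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        dest = os.path.join(ssh_dir, "authorized_keys")
        with open(dest, "w") as f:
            f.write(key.rstrip() + "\n")
        os.chmod(dest, 0o600)  # sshd's StrictModes rejects group/world-writable files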
Sep 5 23:53:04.837033 amazon-ssm-agent[2206]: 2025/09/05 23:53:04 processing appconfig overrides Sep 5 23:53:04.894931 amazon-ssm-agent[2206]: 2025-09-05 23:53:04 INFO https_proxy: Sep 5 23:53:04.907779 containerd[2163]: time="2025-09-05T23:53:04.906475357Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 23:53:04.994483 amazon-ssm-agent[2206]: 2025-09-05 23:53:04 INFO http_proxy: Sep 5 23:53:05.103978 amazon-ssm-agent[2206]: 2025-09-05 23:53:04 INFO no_proxy: Sep 5 23:53:05.112735 coreos-metadata[2286]: Sep 05 23:53:05.110 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 5 23:53:05.116588 coreos-metadata[2286]: Sep 05 23:53:05.114 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 5 23:53:05.117595 coreos-metadata[2286]: Sep 05 23:53:05.117 INFO Fetch successful Sep 5 23:53:05.117595 coreos-metadata[2286]: Sep 05 23:53:05.117 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 5 23:53:05.123643 coreos-metadata[2286]: Sep 05 23:53:05.122 INFO Fetch successful Sep 5 23:53:05.132857 unknown[2286]: wrote ssh authorized keys file for user: core Sep 5 23:53:05.201950 amazon-ssm-agent[2206]: 2025-09-05 23:53:04 INFO Checking if agent identity type OnPrem can be assumed Sep 5 23:53:05.205791 update-ssh-keys[2315]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:53:05.207386 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 5 23:53:05.226267 containerd[2163]: time="2025-09-05T23:53:05.215595563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:05.235656 systemd[1]: Finished sshkeys.service. Sep 5 23:53:05.249740 containerd[2163]: time="2025-09-05T23:53:05.247942523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:53:05.249740 containerd[2163]: time="2025-09-05T23:53:05.248015063Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 23:53:05.249740 containerd[2163]: time="2025-09-05T23:53:05.248050859Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 23:53:05.249740 containerd[2163]: time="2025-09-05T23:53:05.248380031Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 23:53:05.249740 containerd[2163]: time="2025-09-05T23:53:05.248421155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:05.249740 containerd[2163]: time="2025-09-05T23:53:05.248544215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:53:05.249740 containerd[2163]: time="2025-09-05T23:53:05.248574359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:05.249740 containerd[2163]: time="2025-09-05T23:53:05.248964203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:53:05.249740 containerd[2163]: time="2025-09-05T23:53:05.249002723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:05.249740 containerd[2163]: time="2025-09-05T23:53:05.249037439Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:53:05.249740 containerd[2163]: time="2025-09-05T23:53:05.249061703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:05.250281 containerd[2163]: time="2025-09-05T23:53:05.249232547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:05.250281 containerd[2163]: time="2025-09-05T23:53:05.249629231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:05.259610 containerd[2163]: time="2025-09-05T23:53:05.259530527Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:53:05.265958 containerd[2163]: time="2025-09-05T23:53:05.259863587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 23:53:05.265958 containerd[2163]: time="2025-09-05T23:53:05.263047595Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 23:53:05.265958 containerd[2163]: time="2025-09-05T23:53:05.265252175Z" level=info msg="metadata content store policy set" policy=shared Sep 5 23:53:05.281379 containerd[2163]: time="2025-09-05T23:53:05.281323619Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 23:53:05.282413 containerd[2163]: time="2025-09-05T23:53:05.281793311Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 23:53:05.282413 containerd[2163]: time="2025-09-05T23:53:05.281852687Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 23:53:05.282413 containerd[2163]: time="2025-09-05T23:53:05.281918243Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 23:53:05.282413 containerd[2163]: time="2025-09-05T23:53:05.281984207Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 23:53:05.282413 containerd[2163]: time="2025-09-05T23:53:05.282343655Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 23:53:05.287738 containerd[2163]: time="2025-09-05T23:53:05.287089451Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 23:53:05.287738 containerd[2163]: time="2025-09-05T23:53:05.287382227Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Sep 5 23:53:05.287738 containerd[2163]: time="2025-09-05T23:53:05.287419319Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 23:53:05.287738 containerd[2163]: time="2025-09-05T23:53:05.287451623Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 23:53:05.287738 containerd[2163]: time="2025-09-05T23:53:05.287500619Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 23:53:05.287738 containerd[2163]: time="2025-09-05T23:53:05.287532491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 23:53:05.287738 containerd[2163]: time="2025-09-05T23:53:05.287562251Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 23:53:05.287738 containerd[2163]: time="2025-09-05T23:53:05.287594495Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 23:53:05.287738 containerd[2163]: time="2025-09-05T23:53:05.287634311Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 23:53:05.287738 containerd[2163]: time="2025-09-05T23:53:05.287664527Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 23:53:05.292921 containerd[2163]: time="2025-09-05T23:53:05.287695295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 23:53:05.292921 containerd[2163]: time="2025-09-05T23:53:05.291142187Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 23:53:05.292921 containerd[2163]: time="2025-09-05T23:53:05.292856183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.293336 containerd[2163]: time="2025-09-05T23:53:05.293285711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.293994 containerd[2163]: time="2025-09-05T23:53:05.293494151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.293994 containerd[2163]: time="2025-09-05T23:53:05.293577143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.293994 containerd[2163]: time="2025-09-05T23:53:05.293614811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.293994 containerd[2163]: time="2025-09-05T23:53:05.293675903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.293994 containerd[2163]: time="2025-09-05T23:53:05.293738879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.293994 containerd[2163]: time="2025-09-05T23:53:05.293778371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.293994 containerd[2163]: time="2025-09-05T23:53:05.293838623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 5 23:53:05.293994 containerd[2163]: time="2025-09-05T23:53:05.293879063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.293994 containerd[2163]: time="2025-09-05T23:53:05.293941259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.294831 containerd[2163]: time="2025-09-05T23:53:05.294496739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.294831 containerd[2163]: time="2025-09-05T23:53:05.294774227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.295161 containerd[2163]: time="2025-09-05T23:53:05.295107023Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 23:53:05.301729 containerd[2163]: time="2025-09-05T23:53:05.295283879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.301729 containerd[2163]: time="2025-09-05T23:53:05.297794795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.301729 containerd[2163]: time="2025-09-05T23:53:05.297884519Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 23:53:05.302263 containerd[2163]: time="2025-09-05T23:53:05.299621063Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 23:53:05.302729 containerd[2163]: time="2025-09-05T23:53:05.302435027Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 23:53:05.302729 containerd[2163]: time="2025-09-05T23:53:05.302529935Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 23:53:05.302729 containerd[2163]: time="2025-09-05T23:53:05.302593943Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 23:53:05.302729 containerd[2163]: time="2025-09-05T23:53:05.302623415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 23:53:05.302729 containerd[2163]: time="2025-09-05T23:53:05.302683427Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 23:53:05.303030 amazon-ssm-agent[2206]: 2025-09-05 23:53:04 INFO Checking if agent identity type EC2 can be assumed Sep 5 23:53:05.303263 containerd[2163]: time="2025-09-05T23:53:05.303177371Z" level=info msg="NRI interface is disabled by configuration." Sep 5 23:53:05.303480 containerd[2163]: time="2025-09-05T23:53:05.303424547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 5 23:53:05.310866 containerd[2163]: time="2025-09-05T23:53:05.310336535Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 23:53:05.310866 containerd[2163]: time="2025-09-05T23:53:05.310581215Z" level=info msg="Connect containerd service" Sep 5 23:53:05.310866 containerd[2163]: time="2025-09-05T23:53:05.310695419Z" level=info msg="using legacy CRI server" Sep 5 23:53:05.310866 containerd[2163]: time="2025-09-05T23:53:05.310761719Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 23:53:05.315746 containerd[2163]: time="2025-09-05T23:53:05.311757491Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 23:53:05.320896 containerd[2163]: time="2025-09-05T23:53:05.320831987Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:53:05.322743 
containerd[2163]: time="2025-09-05T23:53:05.321307559Z" level=info msg="Start subscribing containerd event" Sep 5 23:53:05.322743 containerd[2163]: time="2025-09-05T23:53:05.321419423Z" level=info msg="Start recovering state" Sep 5 23:53:05.322743 containerd[2163]: time="2025-09-05T23:53:05.321556607Z" level=info msg="Start event monitor" Sep 5 23:53:05.322743 containerd[2163]: time="2025-09-05T23:53:05.321584111Z" level=info msg="Start snapshots syncer" Sep 5 23:53:05.322743 containerd[2163]: time="2025-09-05T23:53:05.321605807Z" level=info msg="Start cni network conf syncer for default" Sep 5 23:53:05.322743 containerd[2163]: time="2025-09-05T23:53:05.321624311Z" level=info msg="Start streaming server" Sep 5 23:53:05.324032 containerd[2163]: time="2025-09-05T23:53:05.323972807Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 23:53:05.326152 containerd[2163]: time="2025-09-05T23:53:05.326104403Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 23:53:05.326420 containerd[2163]: time="2025-09-05T23:53:05.326368775Z" level=info msg="containerd successfully booted in 0.428591s" Sep 5 23:53:05.329512 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 23:53:05.409812 amazon-ssm-agent[2206]: 2025-09-05 23:53:05 INFO Agent will take identity from EC2 Sep 5 23:53:05.470334 dbus-daemon[2092]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 5 23:53:05.472841 dbus-daemon[2092]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2172 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 5 23:53:05.470615 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 5 23:53:05.492003 locksmithd[2188]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 23:53:05.496175 systemd[1]: Starting polkit.service - Authorization Manager... Sep 5 23:53:05.513295 amazon-ssm-agent[2206]: 2025-09-05 23:53:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 5 23:53:05.534779 polkitd[2335]: Started polkitd version 121 Sep 5 23:53:05.549073 polkitd[2335]: Loading rules from directory /etc/polkit-1/rules.d Sep 5 23:53:05.549444 polkitd[2335]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 5 23:53:05.550791 polkitd[2335]: Finished loading, compiling and executing 2 rules Sep 5 23:53:05.551974 dbus-daemon[2092]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 5 23:53:05.552519 systemd[1]: Started polkit.service - Authorization Manager. Sep 5 23:53:05.558567 polkitd[2335]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 5 23:53:05.612049 amazon-ssm-agent[2206]: 2025-09-05 23:53:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 5 23:53:05.618005 systemd-hostnamed[2172]: Hostname set to (transient) Sep 5 23:53:05.618007 systemd-resolved[2028]: System hostname changed to 'ip-172-31-22-93'. 
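containerd's earlier complaint that "no network config found in /etc/cni/net.d" is expected at this stage: the CRI plugin starts before any network add-on has installed a config, and the "cni network conf syncer" started above picks one up when it appears. Purely for illustration, writing a minimal bridge/host-local conflist of the kind the syncer loads; the name and subnet are invented, and nothing in this log says which CNI plugin the cluster will actually use:

    import json

    # Hypothetical example; the real config arrives later with the
    # cluster's network plugin.
    conflist = {
        "cniVersion": "0.4.0",
        "name": "examplenet",
        "plugins": [{
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }],
    }

    with open("/etc/cni/net.d/10-examplenet.conflist", "w") as f:
        json.dump(conflist, f, indent=2)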
Sep 5 23:53:05.712828 amazon-ssm-agent[2206]: 2025-09-05 23:53:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 5 23:53:05.813741 amazon-ssm-agent[2206]: 2025-09-05 23:53:05 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 5 23:53:05.911592 amazon-ssm-agent[2206]: 2025-09-05 23:53:05 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 5 23:53:06.012225 amazon-ssm-agent[2206]: 2025-09-05 23:53:05 INFO [amazon-ssm-agent] Starting Core Agent Sep 5 23:53:06.112868 amazon-ssm-agent[2206]: 2025-09-05 23:53:05 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 5 23:53:06.213199 amazon-ssm-agent[2206]: 2025-09-05 23:53:05 INFO [Registrar] Starting registrar module Sep 5 23:53:06.313622 amazon-ssm-agent[2206]: 2025-09-05 23:53:05 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 5 23:53:06.386510 tar[2137]: linux-arm64/LICENSE Sep 5 23:53:06.387737 tar[2137]: linux-arm64/README.md Sep 5 23:53:06.436857 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 23:53:06.687991 amazon-ssm-agent[2206]: 2025-09-05 23:53:06 INFO [EC2Identity] EC2 registration was successful. Sep 5 23:53:06.734384 amazon-ssm-agent[2206]: 2025-09-05 23:53:06 INFO [CredentialRefresher] credentialRefresher has started Sep 5 23:53:06.736775 amazon-ssm-agent[2206]: 2025-09-05 23:53:06 INFO [CredentialRefresher] Starting credentials refresher loop Sep 5 23:53:06.736775 amazon-ssm-agent[2206]: 2025-09-05 23:53:06 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 5 23:53:06.789043 amazon-ssm-agent[2206]: 2025-09-05 23:53:06 INFO [CredentialRefresher] Next credential rotation will be in 31.2249566484 minutes Sep 5 23:53:06.835233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:06.852507 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:53:07.016767 sshd_keygen[2143]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 23:53:07.072972 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 23:53:07.091323 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 23:53:07.102391 systemd[1]: Started sshd@0-172.31.22.93:22-139.178.68.195:35386.service - OpenSSH per-connection server daemon (139.178.68.195:35386). Sep 5 23:53:07.117073 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 23:53:07.119084 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 23:53:07.143284 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 23:53:07.200350 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 23:53:07.216234 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 23:53:07.237646 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 23:53:07.247439 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 23:53:07.251675 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 23:53:07.255197 systemd[1]: Startup finished in 9.378s (kernel) + 10.285s (userspace) = 19.663s. 
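The sshd-keygen step above ("generating new host keys: RSA ECDSA ED25519") matches what ssh-keygen's -A mode does: create any default-type host key that is missing under /etc/ssh. As a sketch:

    import subprocess

    # -A generates missing rsa/ecdsa/ed25519 host keys with default
    # paths and empty passphrases; existing keys are left alone.
    subprocess.run(["ssh-keygen", "-A"], check=True)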
Sep 5 23:53:07.347365 sshd[2379]: Accepted publickey for core from 139.178.68.195 port 35386 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:07.349467 sshd[2379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:07.367399 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 23:53:07.377397 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 23:53:07.385855 systemd-logind[2117]: New session 1 of user core. Sep 5 23:53:07.417648 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 23:53:07.433415 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 23:53:07.459254 (systemd)[2398]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:53:07.731147 systemd[2398]: Queued start job for default target default.target. Sep 5 23:53:07.732568 systemd[2398]: Created slice app.slice - User Application Slice. Sep 5 23:53:07.732630 systemd[2398]: Reached target paths.target - Paths. Sep 5 23:53:07.732665 systemd[2398]: Reached target timers.target - Timers. Sep 5 23:53:07.739896 systemd[2398]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 23:53:07.772891 amazon-ssm-agent[2206]: 2025-09-05 23:53:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 5 23:53:07.776347 systemd[2398]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 23:53:07.776483 systemd[2398]: Reached target sockets.target - Sockets. Sep 5 23:53:07.776519 systemd[2398]: Reached target basic.target - Basic System. Sep 5 23:53:07.776641 systemd[2398]: Reached target default.target - Main User Target. Sep 5 23:53:07.776748 systemd[2398]: Startup finished in 303ms. Sep 5 23:53:07.780405 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 23:53:07.794522 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 23:53:07.874773 amazon-ssm-agent[2206]: 2025-09-05 23:53:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2408) started Sep 5 23:53:07.965237 systemd[1]: Started sshd@1-172.31.22.93:22-139.178.68.195:35398.service - OpenSSH per-connection server daemon (139.178.68.195:35398). Sep 5 23:53:07.977850 amazon-ssm-agent[2206]: 2025-09-05 23:53:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 5 23:53:08.128403 kubelet[2364]: E0905 23:53:08.127468 2364 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:53:08.132323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:53:08.132933 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:53:08.193540 sshd[2418]: Accepted publickey for core from 139.178.68.195 port 35398 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:08.197212 sshd[2418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:08.208169 systemd-logind[2117]: New session 2 of user core. Sep 5 23:53:08.218463 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 5 23:53:08.354120 sshd[2418]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:08.362814 systemd-logind[2117]: Session 2 logged out. Waiting for processes to exit. Sep 5 23:53:08.363266 systemd[1]: sshd@1-172.31.22.93:22-139.178.68.195:35398.service: Deactivated successfully. Sep 5 23:53:08.368484 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 23:53:08.371882 systemd-logind[2117]: Removed session 2. Sep 5 23:53:08.383268 systemd[1]: Started sshd@2-172.31.22.93:22-139.178.68.195:35402.service - OpenSSH per-connection server daemon (139.178.68.195:35402). Sep 5 23:53:08.564521 sshd[2431]: Accepted publickey for core from 139.178.68.195 port 35402 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:08.567604 sshd[2431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:08.578316 systemd-logind[2117]: New session 3 of user core. Sep 5 23:53:08.588468 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 23:53:08.713251 sshd[2431]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:08.720652 systemd[1]: sshd@2-172.31.22.93:22-139.178.68.195:35402.service: Deactivated successfully. Sep 5 23:53:08.721059 systemd-logind[2117]: Session 3 logged out. Waiting for processes to exit. Sep 5 23:53:08.729819 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 23:53:08.731201 systemd-logind[2117]: Removed session 3. Sep 5 23:53:08.745436 systemd[1]: Started sshd@3-172.31.22.93:22-139.178.68.195:35414.service - OpenSSH per-connection server daemon (139.178.68.195:35414). Sep 5 23:53:08.917805 sshd[2439]: Accepted publickey for core from 139.178.68.195 port 35414 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:08.920804 sshd[2439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:08.931355 systemd-logind[2117]: New session 4 of user core. Sep 5 23:53:08.942508 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 23:53:09.076133 sshd[2439]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:09.084468 systemd[1]: sshd@3-172.31.22.93:22-139.178.68.195:35414.service: Deactivated successfully. Sep 5 23:53:09.086191 systemd-logind[2117]: Session 4 logged out. Waiting for processes to exit. Sep 5 23:53:09.091769 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 23:53:09.094263 systemd-logind[2117]: Removed session 4. Sep 5 23:53:09.106230 systemd[1]: Started sshd@4-172.31.22.93:22-139.178.68.195:35426.service - OpenSSH per-connection server daemon (139.178.68.195:35426). Sep 5 23:53:09.290379 sshd[2447]: Accepted publickey for core from 139.178.68.195 port 35426 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:09.293397 sshd[2447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:09.303946 systemd-logind[2117]: New session 5 of user core. Sep 5 23:53:09.315334 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 23:53:09.443635 sudo[2451]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 23:53:09.444437 sudo[2451]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:53:09.465015 sudo[2451]: pam_unix(sudo:session): session closed for user root Sep 5 23:53:09.490294 sshd[2447]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:09.496405 systemd[1]: sshd@4-172.31.22.93:22-139.178.68.195:35426.service: Deactivated successfully. 
Sep 5 23:53:09.504736 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 23:53:09.507644 systemd-logind[2117]: Session 5 logged out. Waiting for processes to exit. Sep 5 23:53:09.521358 systemd[1]: Started sshd@5-172.31.22.93:22-139.178.68.195:35438.service - OpenSSH per-connection server daemon (139.178.68.195:35438). Sep 5 23:53:09.523828 systemd-logind[2117]: Removed session 5. Sep 5 23:53:09.706157 sshd[2456]: Accepted publickey for core from 139.178.68.195 port 35438 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:09.709089 sshd[2456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:09.719122 systemd-logind[2117]: New session 6 of user core. Sep 5 23:53:09.727427 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 23:53:09.836999 sudo[2461]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 23:53:09.837914 sudo[2461]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:53:09.845234 sudo[2461]: pam_unix(sudo:session): session closed for user root Sep 5 23:53:09.856937 sudo[2460]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 23:53:09.858310 sudo[2460]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:53:09.892263 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 23:53:09.896784 auditctl[2464]: No rules Sep 5 23:53:09.898923 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 23:53:09.900356 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 23:53:09.909433 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:53:09.965823 augenrules[2483]: No rules Sep 5 23:53:09.970336 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:53:09.975955 sudo[2460]: pam_unix(sudo:session): session closed for user root Sep 5 23:53:09.999107 sshd[2456]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:10.006687 systemd[1]: sshd@5-172.31.22.93:22-139.178.68.195:35438.service: Deactivated successfully. Sep 5 23:53:10.007256 systemd-logind[2117]: Session 6 logged out. Waiting for processes to exit. Sep 5 23:53:10.014278 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 23:53:10.020820 systemd-logind[2117]: Removed session 6. Sep 5 23:53:10.028280 systemd[1]: Started sshd@6-172.31.22.93:22-139.178.68.195:35446.service - OpenSSH per-connection server daemon (139.178.68.195:35446). Sep 5 23:53:10.206317 sshd[2492]: Accepted publickey for core from 139.178.68.195 port 35446 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:10.209255 sshd[2492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:10.219090 systemd-logind[2117]: New session 7 of user core. Sep 5 23:53:10.227360 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 23:53:10.335435 sudo[2497]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 23:53:10.336218 sudo[2497]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:53:10.407297 systemd-resolved[2028]: Clock change detected. Flushing caches. Sep 5 23:53:10.503423 systemd[1]: Starting docker.service - Docker Application Container Engine... 
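The audit-rules restart above is a flush-and-reload cycle: stopping the unit clears the kernel rule list (auditctl prints "No rules" when nothing is loaded), and starting it rebuilds the set from /etc/audit/rules.d, which is empty now that the two .rules files were removed, so augenrules also reports "No rules". Roughly what the unit's two phases run, as a sketch:

    import subprocess

    # Stop phase: -D deletes every loaded audit rule.
    subprocess.run(["auditctl", "-D"], check=True)

    # Start phase: merge /etc/audit/rules.d/*.rules and load the result;
    # with the directory emptied above, the merged set is empty too.
    subprocess.run(["augenrules", "--load"], check=True)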
Sep 5 23:53:10.520301 (dockerd)[2512]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 23:53:10.944603 dockerd[2512]: time="2025-09-05T23:53:10.944504730Z" level=info msg="Starting up" Sep 5 23:53:11.296061 dockerd[2512]: time="2025-09-05T23:53:11.295319464Z" level=info msg="Loading containers: start." Sep 5 23:53:11.457753 kernel: Initializing XFRM netlink socket Sep 5 23:53:11.497934 (udev-worker)[2534]: Network interface NamePolicy= disabled on kernel command line. Sep 5 23:53:11.580903 systemd-networkd[1690]: docker0: Link UP Sep 5 23:53:11.606529 dockerd[2512]: time="2025-09-05T23:53:11.606273101Z" level=info msg="Loading containers: done." Sep 5 23:53:11.638680 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3318440554-merged.mount: Deactivated successfully. Sep 5 23:53:11.645515 dockerd[2512]: time="2025-09-05T23:53:11.645100530Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 23:53:11.645515 dockerd[2512]: time="2025-09-05T23:53:11.645244626Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 23:53:11.645730 dockerd[2512]: time="2025-09-05T23:53:11.645442602Z" level=info msg="Daemon has completed initialization" Sep 5 23:53:11.712842 dockerd[2512]: time="2025-09-05T23:53:11.712601274Z" level=info msg="API listen on /run/docker.sock" Sep 5 23:53:11.714491 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 23:53:12.833455 containerd[2163]: time="2025-09-05T23:53:12.833362352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 23:53:13.506263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243118627.mount: Deactivated successfully. 
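Once dockerd logs "API listen on /run/docker.sock", the daemon answers plain HTTP over that unix socket, and /_ping is the cheapest liveness check. A dependency-free probe:

    import socket

    # HTTP/1.0 makes the daemon close the connection after one
    # response, so a single recv is enough for this tiny reply.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    print(s.recv(4096).decode())  # expect a 200 status and body "OK"
    s.close()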
Sep 5 23:53:14.928412 containerd[2163]: time="2025-09-05T23:53:14.928330426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:14.931728 containerd[2163]: time="2025-09-05T23:53:14.931166806Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652441" Sep 5 23:53:14.934011 containerd[2163]: time="2025-09-05T23:53:14.933913318Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:14.944823 containerd[2163]: time="2025-09-05T23:53:14.944728918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:14.947142 containerd[2163]: time="2025-09-05T23:53:14.946750750Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 2.113305886s" Sep 5 23:53:14.947142 containerd[2163]: time="2025-09-05T23:53:14.946811134Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 5 23:53:14.949511 containerd[2163]: time="2025-09-05T23:53:14.949424458Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 23:53:16.473501 containerd[2163]: time="2025-09-05T23:53:16.473353606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:16.476147 containerd[2163]: time="2025-09-05T23:53:16.475196374Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460309" Sep 5 23:53:16.476147 containerd[2163]: time="2025-09-05T23:53:16.476064046Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:16.481815 containerd[2163]: time="2025-09-05T23:53:16.481727458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:16.484138 containerd[2163]: time="2025-09-05T23:53:16.484074070Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.534503392s" Sep 5 23:53:16.484398 containerd[2163]: time="2025-09-05T23:53:16.484277458Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 5 23:53:16.486391 
containerd[2163]: time="2025-09-05T23:53:16.486114982Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 23:53:17.735498 containerd[2163]: time="2025-09-05T23:53:17.733569708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:17.735498 containerd[2163]: time="2025-09-05T23:53:17.735347232Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125903" Sep 5 23:53:17.736182 containerd[2163]: time="2025-09-05T23:53:17.735652632Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:17.742823 containerd[2163]: time="2025-09-05T23:53:17.742737984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:17.744994 containerd[2163]: time="2025-09-05T23:53:17.744947316Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.25877939s" Sep 5 23:53:17.745276 containerd[2163]: time="2025-09-05T23:53:17.745132812Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 5 23:53:17.746176 containerd[2163]: time="2025-09-05T23:53:17.745746108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 23:53:18.032849 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 23:53:18.043796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:18.444003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:18.456236 (kubelet)[2734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:53:18.574097 kubelet[2734]: E0905 23:53:18.573871 2734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:53:18.583278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:53:18.583742 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:53:19.091981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1450630947.mount: Deactivated successfully. 
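This kubelet failure (and the identical ones at 23:53:08 and 23:53:29) is the normal pre-bootstrap state: the unit is enabled, but /var/lib/kubelet/config.yaml only exists once kubeadm, or whatever provisions this node, writes it, so each start exits 1 and systemd schedules the next restart. The condition it trips over amounts to a missing-file check, sketched here:

    import os
    import sys

    CONFIG = "/var/lib/kubelet/config.yaml"

    # Mirrors the failing load path: no file named by --config, exit
    # non-zero, and systemd's Restart= policy retries later.
    if not os.path.exists(CONFIG):
        sys.exit(f"open {CONFIG}: no such file or directory")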
Sep 5 23:53:19.635418 containerd[2163]: time="2025-09-05T23:53:19.635359993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:19.637431 containerd[2163]: time="2025-09-05T23:53:19.637074625Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916095" Sep 5 23:53:19.637431 containerd[2163]: time="2025-09-05T23:53:19.637366309Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:19.641125 containerd[2163]: time="2025-09-05T23:53:19.641069113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:19.642781 containerd[2163]: time="2025-09-05T23:53:19.642603073Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.896807705s" Sep 5 23:53:19.642781 containerd[2163]: time="2025-09-05T23:53:19.642653737Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 5 23:53:19.644132 containerd[2163]: time="2025-09-05T23:53:19.644065993Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 23:53:20.222907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771028622.mount: Deactivated successfully. 
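Each pull message pairs a byte count with a wall-clock duration, so per-image throughput falls straight out of the log; for the kube-proxy pull above, 26,915,114 bytes in 1.896807705 s is roughly 13.5 MiB/s:

    size_bytes = 26_915_114   # size "26915114" from the pull message
    seconds = 1.896807705     # "in 1.896807705s"
    print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")  # -> 13.5 MiB/s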
Sep 5 23:53:21.369532 containerd[2163]: time="2025-09-05T23:53:21.369139550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:21.371520 containerd[2163]: time="2025-09-05T23:53:21.371424650Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 5 23:53:21.374316 containerd[2163]: time="2025-09-05T23:53:21.374238794Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:21.382492 containerd[2163]: time="2025-09-05T23:53:21.380808974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:21.383451 containerd[2163]: time="2025-09-05T23:53:21.383400638Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.739257101s" Sep 5 23:53:21.383624 containerd[2163]: time="2025-09-05T23:53:21.383592830Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 5 23:53:21.384544 containerd[2163]: time="2025-09-05T23:53:21.384351674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 23:53:21.905917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3375911854.mount: Deactivated successfully. 
Sep 5 23:53:21.919522 containerd[2163]: time="2025-09-05T23:53:21.918263801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:21.920248 containerd[2163]: time="2025-09-05T23:53:21.920188337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 5 23:53:21.923087 containerd[2163]: time="2025-09-05T23:53:21.923017733Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:21.928099 containerd[2163]: time="2025-09-05T23:53:21.928032221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:21.929976 containerd[2163]: time="2025-09-05T23:53:21.929664689Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 545.253519ms" Sep 5 23:53:21.929976 containerd[2163]: time="2025-09-05T23:53:21.929714981Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 23:53:21.931031 containerd[2163]: time="2025-09-05T23:53:21.930985961Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 5 23:53:22.493686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337735066.mount: Deactivated successfully. Sep 5 23:53:25.637595 containerd[2163]: time="2025-09-05T23:53:25.637514383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:25.639957 containerd[2163]: time="2025-09-05T23:53:25.639888127Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161" Sep 5 23:53:25.642231 containerd[2163]: time="2025-09-05T23:53:25.642146779Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:25.648811 containerd[2163]: time="2025-09-05T23:53:25.648735607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:25.651246 containerd[2163]: time="2025-09-05T23:53:25.651197311Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.720154602s" Sep 5 23:53:25.651584 containerd[2163]: time="2025-09-05T23:53:25.651387199Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 5 23:53:28.834051 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 5 23:53:28.843910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:29.201984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:29.218565 (kubelet)[2889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:53:29.307690 kubelet[2889]: E0905 23:53:29.307630 2889 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:53:29.312842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:53:29.313198 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:53:34.233763 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:34.245936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:34.303999 systemd[1]: Reloading requested from client PID 2906 ('systemctl') (unit session-7.scope)... Sep 5 23:53:34.304047 systemd[1]: Reloading... Sep 5 23:53:34.522510 zram_generator::config[2950]: No configuration found. Sep 5 23:53:34.803747 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:53:34.985974 systemd[1]: Reloading finished in 681 ms. Sep 5 23:53:35.089329 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 23:53:35.089640 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 23:53:35.090886 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:35.106040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:35.309046 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 5 23:53:35.447920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:35.466257 (kubelet)[3027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:53:35.545627 kubelet[3027]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:53:35.545627 kubelet[3027]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 23:53:35.545627 kubelet[3027]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
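From this restart on, the kubelet runs with its real flags and config, but the lines that follow show every reflector call to https://172.31.22.93:6443 failing with "connect: connection refused": nothing is listening on the node's own 6443 yet, presumably because the control-plane static pods have not come up. That state is distinguishable from a filtered or unreachable network with a plain TCP probe, for example:

    import socket

    try:
        # The apiserver endpoint the reflectors below keep dialing.
        socket.create_connection(("172.31.22.93", 6443), timeout=2).close()
        print("apiserver port open")
    except ConnectionRefusedError:
        # An RST came back: host up, nothing bound to 6443 yet.
        print("connection refused")
    except OSError:
        print("filtered or unreachable")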
Sep 5 23:53:35.546277 kubelet[3027]: I0905 23:53:35.545805 3027 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:53:36.102241 kubelet[3027]: I0905 23:53:36.102185 3027 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 23:53:36.103562 kubelet[3027]: I0905 23:53:36.102406 3027 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:53:36.103562 kubelet[3027]: I0905 23:53:36.103015 3027 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 23:53:36.156826 kubelet[3027]: E0905 23:53:36.156649 3027 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.93:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:53:36.159025 kubelet[3027]: I0905 23:53:36.158534 3027 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:53:36.171322 kubelet[3027]: E0905 23:53:36.171112 3027 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:53:36.171322 kubelet[3027]: I0905 23:53:36.171164 3027 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:53:36.179248 kubelet[3027]: I0905 23:53:36.179186 3027 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:53:36.180247 kubelet[3027]: I0905 23:53:36.180202 3027 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 23:53:36.180592 kubelet[3027]: I0905 23:53:36.180527 3027 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:53:36.180902 kubelet[3027]: I0905 23:53:36.180600 3027 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 5 23:53:36.181223 kubelet[3027]: I0905 23:53:36.181189 3027 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:53:36.181301 kubelet[3027]: I0905 23:53:36.181227 3027 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 23:53:36.181649 kubelet[3027]: I0905 23:53:36.181611 3027 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:53:36.186550 kubelet[3027]: I0905 23:53:36.186431 3027 kubelet.go:408] "Attempting to sync node with API server" Sep 5 23:53:36.186550 kubelet[3027]: I0905 23:53:36.186545 3027 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:53:36.187620 kubelet[3027]: I0905 23:53:36.186600 3027 kubelet.go:314] "Adding apiserver pod source" Sep 5 23:53:36.187620 kubelet[3027]: I0905 23:53:36.186635 3027 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:53:36.189054 kubelet[3027]: W0905 23:53:36.188438 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-93&limit=500&resourceVersion=0": dial tcp 172.31.22.93:6443: connect: connection refused Sep 5 23:53:36.189054 kubelet[3027]: E0905 23:53:36.188584 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.22.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-93&limit=500&resourceVersion=0\": dial tcp 172.31.22.93:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:53:36.193904 kubelet[3027]: W0905 23:53:36.193822 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.93:6443: connect: connection refused Sep 5 23:53:36.194136 kubelet[3027]: E0905 23:53:36.194095 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.93:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:53:36.194736 kubelet[3027]: I0905 23:53:36.194691 3027 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:53:36.196543 kubelet[3027]: I0905 23:53:36.196098 3027 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:53:36.196543 kubelet[3027]: W0905 23:53:36.196338 3027 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 23:53:36.199089 kubelet[3027]: I0905 23:53:36.199022 3027 server.go:1274] "Started kubelet" Sep 5 23:53:36.202497 kubelet[3027]: I0905 23:53:36.201841 3027 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:53:36.208302 kubelet[3027]: I0905 23:53:36.208180 3027 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:53:36.209081 kubelet[3027]: I0905 23:53:36.208884 3027 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:53:36.210163 kubelet[3027]: I0905 23:53:36.210114 3027 server.go:449] "Adding debug handlers to kubelet server" Sep 5 23:53:36.213024 kubelet[3027]: E0905 23:53:36.210306 3027 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.93:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.93:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-93.18628812288c3478 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-93,UID:ip-172-31-22-93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-93,},FirstTimestamp:2025-09-05 23:53:36.198980728 +0000 UTC m=+0.725775497,LastTimestamp:2025-09-05 23:53:36.198980728 +0000 UTC m=+0.725775497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-93,}" Sep 5 23:53:36.218135 kubelet[3027]: I0905 23:53:36.218070 3027 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:53:36.221998 kubelet[3027]: I0905 23:53:36.221893 3027 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:53:36.224429 kubelet[3027]: I0905 23:53:36.222641 3027 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 
23:53:36.224429 kubelet[3027]: E0905 23:53:36.223170 3027 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-93\" not found" Sep 5 23:53:36.228794 kubelet[3027]: E0905 23:53:36.228733 3027 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-93?timeout=10s\": dial tcp 172.31.22.93:6443: connect: connection refused" interval="200ms" Sep 5 23:53:36.230227 kubelet[3027]: I0905 23:53:36.230180 3027 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 23:53:36.235232 kubelet[3027]: W0905 23:53:36.235147 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.93:6443: connect: connection refused Sep 5 23:53:36.235488 kubelet[3027]: E0905 23:53:36.235426 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.93:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:53:36.237985 kubelet[3027]: E0905 23:53:36.237941 3027 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:53:36.238679 kubelet[3027]: I0905 23:53:36.238642 3027 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:53:36.238867 kubelet[3027]: I0905 23:53:36.238844 3027 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:53:36.239106 kubelet[3027]: I0905 23:53:36.239059 3027 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:53:36.239453 kubelet[3027]: I0905 23:53:36.239414 3027 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:53:36.280169 kubelet[3027]: I0905 23:53:36.280099 3027 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:53:36.283792 kubelet[3027]: I0905 23:53:36.283726 3027 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 23:53:36.283792 kubelet[3027]: I0905 23:53:36.283787 3027 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 23:53:36.284060 kubelet[3027]: I0905 23:53:36.283823 3027 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 23:53:36.284060 kubelet[3027]: E0905 23:53:36.283912 3027 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:53:36.288966 kubelet[3027]: W0905 23:53:36.288803 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.93:6443: connect: connection refused Sep 5 23:53:36.288966 kubelet[3027]: E0905 23:53:36.288906 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.93:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:53:36.303186 kubelet[3027]: I0905 23:53:36.303144 3027 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 23:53:36.303186 kubelet[3027]: I0905 23:53:36.303181 3027 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 23:53:36.303402 kubelet[3027]: I0905 23:53:36.303222 3027 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:53:36.308398 kubelet[3027]: I0905 23:53:36.308334 3027 policy_none.go:49] "None policy: Start" Sep 5 23:53:36.309931 kubelet[3027]: I0905 23:53:36.309890 3027 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 23:53:36.310408 kubelet[3027]: I0905 23:53:36.310174 3027 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:53:36.323599 kubelet[3027]: I0905 23:53:36.322819 3027 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:53:36.323599 kubelet[3027]: I0905 23:53:36.323164 3027 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:53:36.323599 kubelet[3027]: I0905 23:53:36.323190 3027 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:53:36.327167 kubelet[3027]: I0905 23:53:36.327130 3027 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:53:36.330037 kubelet[3027]: E0905 23:53:36.329972 3027 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-93\" not found" Sep 5 23:53:36.426894 kubelet[3027]: I0905 23:53:36.426041 3027 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-93" Sep 5 23:53:36.428512 kubelet[3027]: E0905 23:53:36.427289 3027 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.93:6443/api/v1/nodes\": dial tcp 172.31.22.93:6443: connect: connection refused" node="ip-172-31-22-93" Sep 5 23:53:36.429926 kubelet[3027]: E0905 23:53:36.429858 3027 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-93?timeout=10s\": dial tcp 172.31.22.93:6443: connect: connection refused" interval="400ms" Sep 5 23:53:36.440044 kubelet[3027]: I0905 23:53:36.439539 3027 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68ee87d7cc0200492d4742afab226945-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-93\" (UID: \"68ee87d7cc0200492d4742afab226945\") " pod="kube-system/kube-controller-manager-ip-172-31-22-93" Sep 5 23:53:36.440044 kubelet[3027]: I0905 23:53:36.439609 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68ee87d7cc0200492d4742afab226945-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-93\" (UID: \"68ee87d7cc0200492d4742afab226945\") " pod="kube-system/kube-controller-manager-ip-172-31-22-93" Sep 5 23:53:36.440044 kubelet[3027]: I0905 23:53:36.439651 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68ee87d7cc0200492d4742afab226945-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-93\" (UID: \"68ee87d7cc0200492d4742afab226945\") " pod="kube-system/kube-controller-manager-ip-172-31-22-93" Sep 5 23:53:36.440044 kubelet[3027]: I0905 23:53:36.439690 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/376f56cc6c0d09cb68d5b057752389cc-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-93\" (UID: \"376f56cc6c0d09cb68d5b057752389cc\") " pod="kube-system/kube-scheduler-ip-172-31-22-93" Sep 5 23:53:36.440044 kubelet[3027]: I0905 23:53:36.439725 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68ee87d7cc0200492d4742afab226945-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-93\" (UID: \"68ee87d7cc0200492d4742afab226945\") " pod="kube-system/kube-controller-manager-ip-172-31-22-93" Sep 5 23:53:36.440415 kubelet[3027]: I0905 23:53:36.439760 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68ee87d7cc0200492d4742afab226945-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-93\" (UID: \"68ee87d7cc0200492d4742afab226945\") " pod="kube-system/kube-controller-manager-ip-172-31-22-93" Sep 5 23:53:36.440415 kubelet[3027]: I0905 23:53:36.439795 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10c50d093d1e462c1d42c366b45ee9e5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-93\" (UID: \"10c50d093d1e462c1d42c366b45ee9e5\") " pod="kube-system/kube-apiserver-ip-172-31-22-93" Sep 5 23:53:36.440415 kubelet[3027]: I0905 23:53:36.439829 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10c50d093d1e462c1d42c366b45ee9e5-ca-certs\") pod \"kube-apiserver-ip-172-31-22-93\" (UID: \"10c50d093d1e462c1d42c366b45ee9e5\") " pod="kube-system/kube-apiserver-ip-172-31-22-93" Sep 5 23:53:36.440415 kubelet[3027]: I0905 23:53:36.439863 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10c50d093d1e462c1d42c366b45ee9e5-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-93\" (UID: \"10c50d093d1e462c1d42c366b45ee9e5\") " 
pod="kube-system/kube-apiserver-ip-172-31-22-93" Sep 5 23:53:36.629617 kubelet[3027]: I0905 23:53:36.629557 3027 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-93" Sep 5 23:53:36.630549 kubelet[3027]: E0905 23:53:36.630032 3027 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.93:6443/api/v1/nodes\": dial tcp 172.31.22.93:6443: connect: connection refused" node="ip-172-31-22-93" Sep 5 23:53:36.700516 containerd[2163]: time="2025-09-05T23:53:36.700274418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-93,Uid:10c50d093d1e462c1d42c366b45ee9e5,Namespace:kube-system,Attempt:0,}" Sep 5 23:53:36.707311 containerd[2163]: time="2025-09-05T23:53:36.707245962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-93,Uid:68ee87d7cc0200492d4742afab226945,Namespace:kube-system,Attempt:0,}" Sep 5 23:53:36.717511 containerd[2163]: time="2025-09-05T23:53:36.717085590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-93,Uid:376f56cc6c0d09cb68d5b057752389cc,Namespace:kube-system,Attempt:0,}" Sep 5 23:53:36.831045 kubelet[3027]: E0905 23:53:36.830980 3027 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-93?timeout=10s\": dial tcp 172.31.22.93:6443: connect: connection refused" interval="800ms" Sep 5 23:53:37.033334 kubelet[3027]: I0905 23:53:37.032803 3027 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-93" Sep 5 23:53:37.033497 kubelet[3027]: E0905 23:53:37.033362 3027 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.93:6443/api/v1/nodes\": dial tcp 172.31.22.93:6443: connect: connection refused" node="ip-172-31-22-93" Sep 5 23:53:37.089355 kubelet[3027]: W0905 23:53:37.089076 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-93&limit=500&resourceVersion=0": dial tcp 172.31.22.93:6443: connect: connection refused Sep 5 23:53:37.089355 kubelet[3027]: E0905 23:53:37.089205 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-93&limit=500&resourceVersion=0\": dial tcp 172.31.22.93:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:53:37.241895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791130395.mount: Deactivated successfully. 
Sep 5 23:53:37.260965 containerd[2163]: time="2025-09-05T23:53:37.260840969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:53:37.263250 containerd[2163]: time="2025-09-05T23:53:37.263174081Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:53:37.265194 containerd[2163]: time="2025-09-05T23:53:37.265119809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 5 23:53:37.267504 containerd[2163]: time="2025-09-05T23:53:37.267232253Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:53:37.269729 containerd[2163]: time="2025-09-05T23:53:37.269644073Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:53:37.272042 containerd[2163]: time="2025-09-05T23:53:37.271805909Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:53:37.274572 containerd[2163]: time="2025-09-05T23:53:37.274397897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:53:37.279586 containerd[2163]: time="2025-09-05T23:53:37.279428765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:53:37.286210 containerd[2163]: time="2025-09-05T23:53:37.285115253Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.887235ms" Sep 5 23:53:37.290975 containerd[2163]: time="2025-09-05T23:53:37.290536001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.168527ms" Sep 5 23:53:37.314029 containerd[2163]: time="2025-09-05T23:53:37.313698437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 613.315443ms" Sep 5 23:53:37.444216 kubelet[3027]: W0905 23:53:37.444135 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.93:6443: connect: connection refused Sep 5 23:53:37.446034 kubelet[3027]: 
E0905 23:53:37.444741 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.93:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:53:37.493346 containerd[2163]: time="2025-09-05T23:53:37.493127610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:37.493346 containerd[2163]: time="2025-09-05T23:53:37.493257366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:37.493346 containerd[2163]: time="2025-09-05T23:53:37.493296858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:37.494031 containerd[2163]: time="2025-09-05T23:53:37.493793394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:37.495246 containerd[2163]: time="2025-09-05T23:53:37.495037962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:37.495246 containerd[2163]: time="2025-09-05T23:53:37.495157182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:37.495246 containerd[2163]: time="2025-09-05T23:53:37.495184482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:37.496026 containerd[2163]: time="2025-09-05T23:53:37.495363654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:37.505997 containerd[2163]: time="2025-09-05T23:53:37.505441842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:37.506369 containerd[2163]: time="2025-09-05T23:53:37.505861854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:37.506887 containerd[2163]: time="2025-09-05T23:53:37.506592774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:37.507681 containerd[2163]: time="2025-09-05T23:53:37.507495102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:37.522122 kubelet[3027]: W0905 23:53:37.521734 3027 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.93:6443: connect: connection refused Sep 5 23:53:37.522122 kubelet[3027]: E0905 23:53:37.521849 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.93:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:53:37.634167 kubelet[3027]: E0905 23:53:37.633703 3027 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-93?timeout=10s\": dial tcp 172.31.22.93:6443: connect: connection refused" interval="1.6s" Sep 5 23:53:37.686676 containerd[2163]: time="2025-09-05T23:53:37.686236687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-93,Uid:10c50d093d1e462c1d42c366b45ee9e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d505cdb78fb63b5eb825cf5184fd574a9dc9921a286cd005ef2a667df86b454\"" Sep 5 23:53:37.691319 containerd[2163]: time="2025-09-05T23:53:37.691212571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-93,Uid:68ee87d7cc0200492d4742afab226945,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b46b5393ad6b85e8835065d3471cafff5e5aae715ebd951167348b244db3a0d\"" Sep 5 23:53:37.699577 containerd[2163]: time="2025-09-05T23:53:37.699514627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-93,Uid:376f56cc6c0d09cb68d5b057752389cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"69b34af34a21aa139d31362d9abfab244bd37f5da7b6d9c942541ce027765ff7\"" Sep 5 23:53:37.706420 containerd[2163]: time="2025-09-05T23:53:37.705291715Z" level=info msg="CreateContainer within sandbox \"4d505cdb78fb63b5eb825cf5184fd574a9dc9921a286cd005ef2a667df86b454\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 23:53:37.708125 containerd[2163]: time="2025-09-05T23:53:37.708043327Z" level=info msg="CreateContainer within sandbox \"5b46b5393ad6b85e8835065d3471cafff5e5aae715ebd951167348b244db3a0d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 23:53:37.720502 containerd[2163]: time="2025-09-05T23:53:37.720372571Z" level=info msg="CreateContainer within sandbox \"69b34af34a21aa139d31362d9abfab244bd37f5da7b6d9c942541ce027765ff7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 23:53:37.757386 containerd[2163]: time="2025-09-05T23:53:37.757064551Z" level=info msg="CreateContainer within sandbox \"5b46b5393ad6b85e8835065d3471cafff5e5aae715ebd951167348b244db3a0d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7c00c5f9057deb8bcce6ed5dea13c82a1639a45b2eecc7942b62e070da912c51\"" Sep 5 23:53:37.758348 containerd[2163]: time="2025-09-05T23:53:37.758025835Z" level=info msg="StartContainer for \"7c00c5f9057deb8bcce6ed5dea13c82a1639a45b2eecc7942b62e070da912c51\"" Sep 5 23:53:37.771657 kubelet[3027]: W0905 23:53:37.771562 3027 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.93:6443: connect: connection refused Sep 5 23:53:37.771657 kubelet[3027]: E0905 23:53:37.771672 3027 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.93:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:53:37.778054 containerd[2163]: time="2025-09-05T23:53:37.777974707Z" level=info msg="CreateContainer within sandbox \"4d505cdb78fb63b5eb825cf5184fd574a9dc9921a286cd005ef2a667df86b454\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57df61ea58d44f428a85613c0a4a129e59242b66dca95785730c1bcd8d6100f4\"" Sep 5 23:53:37.778971 containerd[2163]: time="2025-09-05T23:53:37.778890523Z" level=info msg="StartContainer for \"57df61ea58d44f428a85613c0a4a129e59242b66dca95785730c1bcd8d6100f4\"" Sep 5 23:53:37.781303 containerd[2163]: time="2025-09-05T23:53:37.781063843Z" level=info msg="CreateContainer within sandbox \"69b34af34a21aa139d31362d9abfab244bd37f5da7b6d9c942541ce027765ff7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a6d048acfa1c9e13eca51199790c5a380fc85bfafd691bcace72db54ad9d7fed\"" Sep 5 23:53:37.782510 containerd[2163]: time="2025-09-05T23:53:37.782167111Z" level=info msg="StartContainer for \"a6d048acfa1c9e13eca51199790c5a380fc85bfafd691bcace72db54ad9d7fed\"" Sep 5 23:53:37.842509 kubelet[3027]: I0905 23:53:37.840900 3027 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-93" Sep 5 23:53:37.842509 kubelet[3027]: E0905 23:53:37.842218 3027 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.93:6443/api/v1/nodes\": dial tcp 172.31.22.93:6443: connect: connection refused" node="ip-172-31-22-93" Sep 5 23:53:37.992538 containerd[2163]: time="2025-09-05T23:53:37.990683025Z" level=info msg="StartContainer for \"7c00c5f9057deb8bcce6ed5dea13c82a1639a45b2eecc7942b62e070da912c51\" returns successfully" Sep 5 23:53:38.047118 containerd[2163]: time="2025-09-05T23:53:38.047047373Z" level=info msg="StartContainer for \"a6d048acfa1c9e13eca51199790c5a380fc85bfafd691bcace72db54ad9d7fed\" returns successfully" Sep 5 23:53:38.047637 containerd[2163]: time="2025-09-05T23:53:38.047421953Z" level=info msg="StartContainer for \"57df61ea58d44f428a85613c0a4a129e59242b66dca95785730c1bcd8d6100f4\" returns successfully" Sep 5 23:53:39.450631 kubelet[3027]: I0905 23:53:39.448821 3027 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-93" Sep 5 23:53:43.100492 kubelet[3027]: E0905 23:53:43.099680 3027 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-93\" not found" node="ip-172-31-22-93" Sep 5 23:53:43.200497 kubelet[3027]: I0905 23:53:43.198914 3027 apiserver.go:52] "Watching apiserver" Sep 5 23:53:43.231523 kubelet[3027]: I0905 23:53:43.230645 3027 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 23:53:43.257150 kubelet[3027]: I0905 23:53:43.257094 3027 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-93" Sep 5 23:53:43.257150 kubelet[3027]: E0905 23:53:43.257151 3027 kubelet_node_status.go:535] "Error updating 
node status, will retry" err="error getting node \"ip-172-31-22-93\": node \"ip-172-31-22-93\" not found" Sep 5 23:53:43.726651 kubelet[3027]: E0905 23:53:43.725809 3027 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-22-93\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-22-93" Sep 5 23:53:45.440921 systemd[1]: Reloading requested from client PID 3306 ('systemctl') (unit session-7.scope)... Sep 5 23:53:45.440954 systemd[1]: Reloading... Sep 5 23:53:45.578546 zram_generator::config[3346]: No configuration found. Sep 5 23:53:45.917765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:53:46.108942 systemd[1]: Reloading finished in 667 ms. Sep 5 23:53:46.168141 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:46.190599 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:53:46.191266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:46.210519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:46.569829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:46.587331 (kubelet)[3416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:53:46.719127 kubelet[3416]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:53:46.719127 kubelet[3416]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 23:53:46.721294 kubelet[3416]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:53:46.721294 kubelet[3416]: I0905 23:53:46.719282 3416 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:53:46.732520 kubelet[3416]: I0905 23:53:46.732444 3416 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 23:53:46.732520 kubelet[3416]: I0905 23:53:46.732512 3416 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:53:46.733620 kubelet[3416]: I0905 23:53:46.733146 3416 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 23:53:46.738354 kubelet[3416]: I0905 23:53:46.737993 3416 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 5 23:53:46.743450 kubelet[3416]: I0905 23:53:46.743315 3416 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:53:46.771244 kubelet[3416]: E0905 23:53:46.771174 3416 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:53:46.771244 kubelet[3416]: I0905 23:53:46.771238 3416 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:53:46.783850 kubelet[3416]: I0905 23:53:46.783328 3416 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 23:53:46.785739 kubelet[3416]: I0905 23:53:46.785091 3416 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 23:53:46.787009 kubelet[3416]: I0905 23:53:46.786368 3416 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:53:46.787009 kubelet[3416]: I0905 23:53:46.786450 3416 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 5 23:53:46.787009 kubelet[3416]: I0905 23:53:46.786860 3416 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:53:46.787009 kubelet[3416]: I0905 23:53:46.786889 3416 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 23:53:46.787386 kubelet[3416]: I0905 23:53:46.786962 3416 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:53:46.789511 kubelet[3416]: I0905 23:53:46.789349 3416 kubelet.go:408] "Attempting to sync node with API server" Sep 5 23:53:46.789511 kubelet[3416]: I0905 23:53:46.789411 3416 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:53:46.789511 kubelet[3416]: I0905 23:53:46.789492 3416 
kubelet.go:314] "Adding apiserver pod source" Sep 5 23:53:46.789755 kubelet[3416]: I0905 23:53:46.789526 3416 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:53:46.795390 kubelet[3416]: I0905 23:53:46.791948 3416 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:53:46.795390 kubelet[3416]: I0905 23:53:46.792741 3416 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:53:46.795906 kubelet[3416]: I0905 23:53:46.793455 3416 server.go:1274] "Started kubelet" Sep 5 23:53:46.803709 kubelet[3416]: I0905 23:53:46.803657 3416 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:53:46.824883 kubelet[3416]: I0905 23:53:46.821705 3416 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:53:46.837125 kubelet[3416]: I0905 23:53:46.837057 3416 server.go:449] "Adding debug handlers to kubelet server" Sep 5 23:53:46.846083 kubelet[3416]: I0905 23:53:46.843675 3416 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:53:46.846083 kubelet[3416]: I0905 23:53:46.844192 3416 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:53:46.854906 kubelet[3416]: I0905 23:53:46.854848 3416 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 23:53:46.856546 kubelet[3416]: E0905 23:53:46.855225 3416 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-93\" not found" Sep 5 23:53:46.856546 kubelet[3416]: I0905 23:53:46.856087 3416 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 23:53:46.913183 kubelet[3416]: I0905 23:53:46.912894 3416 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:53:46.913183 kubelet[3416]: I0905 23:53:46.913103 3416 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:53:46.925557 kubelet[3416]: I0905 23:53:46.925415 3416 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:53:46.939371 kubelet[3416]: I0905 23:53:46.925510 3416 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:53:46.952655 kubelet[3416]: E0905 23:53:46.952618 3416 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:53:46.955529 kubelet[3416]: I0905 23:53:46.955359 3416 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:53:46.985991 kubelet[3416]: I0905 23:53:46.985854 3416 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:53:46.990693 kubelet[3416]: E0905 23:53:46.990630 3416 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-93\" not found" Sep 5 23:53:47.003546 kubelet[3416]: I0905 23:53:47.003377 3416 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 23:53:47.006564 kubelet[3416]: I0905 23:53:47.006195 3416 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 23:53:47.006564 kubelet[3416]: I0905 23:53:47.006262 3416 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 23:53:47.006564 kubelet[3416]: E0905 23:53:47.006332 3416 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:53:47.107717 kubelet[3416]: E0905 23:53:47.106509 3416 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 23:53:47.150963 kubelet[3416]: I0905 23:53:47.150824 3416 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 23:53:47.150963 kubelet[3416]: I0905 23:53:47.150853 3416 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 23:53:47.150963 kubelet[3416]: I0905 23:53:47.150886 3416 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:53:47.151934 kubelet[3416]: I0905 23:53:47.151763 3416 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 23:53:47.151934 kubelet[3416]: I0905 23:53:47.151794 3416 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 23:53:47.151934 kubelet[3416]: I0905 23:53:47.151848 3416 policy_none.go:49] "None policy: Start" Sep 5 23:53:47.153598 kubelet[3416]: I0905 23:53:47.153385 3416 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 23:53:47.153598 kubelet[3416]: I0905 23:53:47.153424 3416 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:53:47.154653 kubelet[3416]: I0905 23:53:47.154075 3416 state_mem.go:75] "Updated machine memory state" Sep 5 23:53:47.158385 kubelet[3416]: I0905 23:53:47.158349 3416 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:53:47.158850 kubelet[3416]: I0905 23:53:47.158829 3416 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:53:47.159019 kubelet[3416]: I0905 23:53:47.158967 3416 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:53:47.164766 kubelet[3416]: I0905 23:53:47.164599 3416 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:53:47.281770 kubelet[3416]: I0905 23:53:47.281170 3416 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-93" Sep 5 23:53:47.294946 kubelet[3416]: I0905 23:53:47.294898 3416 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-22-93" Sep 5 23:53:47.296635 kubelet[3416]: I0905 23:53:47.295727 3416 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-93" Sep 5 23:53:47.329630 kubelet[3416]: E0905 23:53:47.328146 3416 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-22-93\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-93" Sep 5 23:53:47.444972 kubelet[3416]: I0905 23:53:47.443630 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68ee87d7cc0200492d4742afab226945-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-93\" (UID: \"68ee87d7cc0200492d4742afab226945\") " pod="kube-system/kube-controller-manager-ip-172-31-22-93" Sep 5 23:53:47.444972 kubelet[3416]: I0905 23:53:47.443712 3416 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10c50d093d1e462c1d42c366b45ee9e5-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-93\" (UID: \"10c50d093d1e462c1d42c366b45ee9e5\") " pod="kube-system/kube-apiserver-ip-172-31-22-93" Sep 5 23:53:47.444972 kubelet[3416]: I0905 23:53:47.443754 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10c50d093d1e462c1d42c366b45ee9e5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-93\" (UID: \"10c50d093d1e462c1d42c366b45ee9e5\") " pod="kube-system/kube-apiserver-ip-172-31-22-93" Sep 5 23:53:47.444972 kubelet[3416]: I0905 23:53:47.443798 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68ee87d7cc0200492d4742afab226945-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-93\" (UID: \"68ee87d7cc0200492d4742afab226945\") " pod="kube-system/kube-controller-manager-ip-172-31-22-93" Sep 5 23:53:47.444972 kubelet[3416]: I0905 23:53:47.443854 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68ee87d7cc0200492d4742afab226945-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-93\" (UID: \"68ee87d7cc0200492d4742afab226945\") " pod="kube-system/kube-controller-manager-ip-172-31-22-93" Sep 5 23:53:47.446304 kubelet[3416]: I0905 23:53:47.443895 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10c50d093d1e462c1d42c366b45ee9e5-ca-certs\") pod \"kube-apiserver-ip-172-31-22-93\" (UID: \"10c50d093d1e462c1d42c366b45ee9e5\") " pod="kube-system/kube-apiserver-ip-172-31-22-93" Sep 5 23:53:47.446304 kubelet[3416]: I0905 23:53:47.443930 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68ee87d7cc0200492d4742afab226945-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-93\" (UID: \"68ee87d7cc0200492d4742afab226945\") " pod="kube-system/kube-controller-manager-ip-172-31-22-93" Sep 5 23:53:47.446304 kubelet[3416]: I0905 23:53:47.443980 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68ee87d7cc0200492d4742afab226945-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-93\" (UID: \"68ee87d7cc0200492d4742afab226945\") " pod="kube-system/kube-controller-manager-ip-172-31-22-93" Sep 5 23:53:47.446304 kubelet[3416]: I0905 23:53:47.444020 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/376f56cc6c0d09cb68d5b057752389cc-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-93\" (UID: \"376f56cc6c0d09cb68d5b057752389cc\") " pod="kube-system/kube-scheduler-ip-172-31-22-93" Sep 5 23:53:47.810585 kubelet[3416]: I0905 23:53:47.810524 3416 apiserver.go:52] "Watching apiserver" Sep 5 23:53:47.856353 kubelet[3416]: I0905 23:53:47.856307 3416 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 23:53:48.063629 kubelet[3416]: E0905 23:53:48.063423 3416 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ip-172-31-22-93\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-93" Sep 5 23:53:48.109061 kubelet[3416]: I0905 23:53:48.107187 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-93" podStartSLOduration=1.1069496189999999 podStartE2EDuration="1.106949619s" podCreationTimestamp="2025-09-05 23:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:53:48.106969191 +0000 UTC m=+1.511818533" watchObservedRunningTime="2025-09-05 23:53:48.106949619 +0000 UTC m=+1.511798961" Sep 5 23:53:48.147818 kubelet[3416]: I0905 23:53:48.146940 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-93" podStartSLOduration=3.146917515 podStartE2EDuration="3.146917515s" podCreationTimestamp="2025-09-05 23:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:53:48.126877527 +0000 UTC m=+1.531726881" watchObservedRunningTime="2025-09-05 23:53:48.146917515 +0000 UTC m=+1.551766881" Sep 5 23:53:48.176577 kubelet[3416]: I0905 23:53:48.174355 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-93" podStartSLOduration=1.174332871 podStartE2EDuration="1.174332871s" podCreationTimestamp="2025-09-05 23:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:53:48.147369339 +0000 UTC m=+1.552218813" watchObservedRunningTime="2025-09-05 23:53:48.174332871 +0000 UTC m=+1.579182225" Sep 5 23:53:49.250015 update_engine[2120]: I20250905 23:53:49.249922 2120 update_attempter.cc:509] Updating boot flags... Sep 5 23:53:49.338534 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3475) Sep 5 23:53:49.623412 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3476) Sep 5 23:53:49.904503 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3476) Sep 5 23:53:51.340173 kubelet[3416]: I0905 23:53:51.339989 3416 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 23:53:51.342869 containerd[2163]: time="2025-09-05T23:53:51.342258571Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 5 23:53:51.343833 kubelet[3416]: I0905 23:53:51.342615 3416 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 23:53:52.081174 kubelet[3416]: I0905 23:53:52.081104 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a60f820-d2b8-4290-912d-29de5dc81e0e-lib-modules\") pod \"kube-proxy-kzd5j\" (UID: \"9a60f820-d2b8-4290-912d-29de5dc81e0e\") " pod="kube-system/kube-proxy-kzd5j" Sep 5 23:53:52.081339 kubelet[3416]: I0905 23:53:52.081175 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5d2n\" (UniqueName: \"kubernetes.io/projected/9a60f820-d2b8-4290-912d-29de5dc81e0e-kube-api-access-b5d2n\") pod \"kube-proxy-kzd5j\" (UID: \"9a60f820-d2b8-4290-912d-29de5dc81e0e\") " pod="kube-system/kube-proxy-kzd5j" Sep 5 23:53:52.081339 kubelet[3416]: I0905 23:53:52.081222 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a60f820-d2b8-4290-912d-29de5dc81e0e-kube-proxy\") pod \"kube-proxy-kzd5j\" (UID: \"9a60f820-d2b8-4290-912d-29de5dc81e0e\") " pod="kube-system/kube-proxy-kzd5j" Sep 5 23:53:52.081339 kubelet[3416]: I0905 23:53:52.081257 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a60f820-d2b8-4290-912d-29de5dc81e0e-xtables-lock\") pod \"kube-proxy-kzd5j\" (UID: \"9a60f820-d2b8-4290-912d-29de5dc81e0e\") " pod="kube-system/kube-proxy-kzd5j" Sep 5 23:53:52.276087 containerd[2163]: time="2025-09-05T23:53:52.276006799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kzd5j,Uid:9a60f820-d2b8-4290-912d-29de5dc81e0e,Namespace:kube-system,Attempt:0,}" Sep 5 23:53:52.383812 containerd[2163]: time="2025-09-05T23:53:52.381050492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:52.383812 containerd[2163]: time="2025-09-05T23:53:52.381146096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:52.383812 containerd[2163]: time="2025-09-05T23:53:52.381172448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:52.383812 containerd[2163]: time="2025-09-05T23:53:52.381331520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:52.535441 containerd[2163]: time="2025-09-05T23:53:52.535377129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kzd5j,Uid:9a60f820-d2b8-4290-912d-29de5dc81e0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a8281e81b143670a91fcdac0c6b67ed4c21c08cac0695aab76a59d629035430\"" Sep 5 23:53:52.543620 containerd[2163]: time="2025-09-05T23:53:52.543553725Z" level=info msg="CreateContainer within sandbox \"8a8281e81b143670a91fcdac0c6b67ed4c21c08cac0695aab76a59d629035430\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 23:53:52.565501 containerd[2163]: time="2025-09-05T23:53:52.565381677Z" level=info msg="CreateContainer within sandbox \"8a8281e81b143670a91fcdac0c6b67ed4c21c08cac0695aab76a59d629035430\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec0a51301b9d7acd43e05f8b1b01e7de7d4d5bef3bcfe94f52b6963ba7d9b93c\"" Sep 5 23:53:52.569532 containerd[2163]: time="2025-09-05T23:53:52.568769145Z" level=info msg="StartContainer for \"ec0a51301b9d7acd43e05f8b1b01e7de7d4d5bef3bcfe94f52b6963ba7d9b93c\"" Sep 5 23:53:52.584201 kubelet[3416]: I0905 23:53:52.583880 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd99j\" (UniqueName: \"kubernetes.io/projected/416c1b11-871f-4b02-bdbb-4827f5f1e9c7-kube-api-access-qd99j\") pod \"tigera-operator-58fc44c59b-n2vld\" (UID: \"416c1b11-871f-4b02-bdbb-4827f5f1e9c7\") " pod="tigera-operator/tigera-operator-58fc44c59b-n2vld" Sep 5 23:53:52.584201 kubelet[3416]: I0905 23:53:52.584141 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/416c1b11-871f-4b02-bdbb-4827f5f1e9c7-var-lib-calico\") pod \"tigera-operator-58fc44c59b-n2vld\" (UID: \"416c1b11-871f-4b02-bdbb-4827f5f1e9c7\") " pod="tigera-operator/tigera-operator-58fc44c59b-n2vld" Sep 5 23:53:52.694403 containerd[2163]: time="2025-09-05T23:53:52.692661490Z" level=info msg="StartContainer for \"ec0a51301b9d7acd43e05f8b1b01e7de7d4d5bef3bcfe94f52b6963ba7d9b93c\" returns successfully" Sep 5 23:53:52.811084 containerd[2163]: time="2025-09-05T23:53:52.811018294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-n2vld,Uid:416c1b11-871f-4b02-bdbb-4827f5f1e9c7,Namespace:tigera-operator,Attempt:0,}" Sep 5 23:53:52.865415 containerd[2163]: time="2025-09-05T23:53:52.864286210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:52.865415 containerd[2163]: time="2025-09-05T23:53:52.864405334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:52.865415 containerd[2163]: time="2025-09-05T23:53:52.864444610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:52.865415 containerd[2163]: time="2025-09-05T23:53:52.864686122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Sep 5 23:53:52.811084 containerd[2163]: time="2025-09-05T23:53:52.811018294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-n2vld,Uid:416c1b11-871f-4b02-bdbb-4827f5f1e9c7,Namespace:tigera-operator,Attempt:0,}"
Sep 5 23:53:52.865415 containerd[2163]: time="2025-09-05T23:53:52.864286210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:53:52.865415 containerd[2163]: time="2025-09-05T23:53:52.864405334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:53:52.865415 containerd[2163]: time="2025-09-05T23:53:52.864444610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:52.865415 containerd[2163]: time="2025-09-05T23:53:52.864686122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:53.018004 containerd[2163]: time="2025-09-05T23:53:53.017884483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-n2vld,Uid:416c1b11-871f-4b02-bdbb-4827f5f1e9c7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f9e2d25e9f10b55d00d7374727841f3aa5cc3fa3481f7b2fcc99c3883e61e8d7\""
Sep 5 23:53:53.025027 containerd[2163]: time="2025-09-05T23:53:53.024778747Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 5 23:53:53.086936 kubelet[3416]: I0905 23:53:53.086494 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kzd5j" podStartSLOduration=2.086443171 podStartE2EDuration="2.086443171s" podCreationTimestamp="2025-09-05 23:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:53:53.086271235 +0000 UTC m=+6.491120601" watchObservedRunningTime="2025-09-05 23:53:53.086443171 +0000 UTC m=+6.491292513"
Sep 5 23:53:54.576433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4270809697.mount: Deactivated successfully.
Sep 5 23:53:56.220536 containerd[2163]: time="2025-09-05T23:53:56.220325183Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:53:56.222804 containerd[2163]: time="2025-09-05T23:53:56.222524627Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 5 23:53:56.225511 containerd[2163]: time="2025-09-05T23:53:56.225409259Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:53:56.231590 containerd[2163]: time="2025-09-05T23:53:56.231444395Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:53:56.233957 containerd[2163]: time="2025-09-05T23:53:56.233441867Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 3.208540156s"
Sep 5 23:53:56.233957 containerd[2163]: time="2025-09-05T23:53:56.233592227Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 5 23:53:56.241383 containerd[2163]: time="2025-09-05T23:53:56.241329383Z" level=info msg="CreateContainer within sandbox \"f9e2d25e9f10b55d00d7374727841f3aa5cc3fa3481f7b2fcc99c3883e61e8d7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 5 23:53:56.271723 containerd[2163]: time="2025-09-05T23:53:56.271650455Z" level=info msg="CreateContainer within sandbox \"f9e2d25e9f10b55d00d7374727841f3aa5cc3fa3481f7b2fcc99c3883e61e8d7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b2d87e0cb9ecce4947152fe24eef78c72788a04507e4c78e45c7236759b04468\""
Sep 5 23:53:56.273606 containerd[2163]: time="2025-09-05T23:53:56.272934575Z" level=info msg="StartContainer for \"b2d87e0cb9ecce4947152fe24eef78c72788a04507e4c78e45c7236759b04468\""
Sep 5 23:53:56.277656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824407419.mount: Deactivated successfully.
Sep 5 23:53:56.383892 containerd[2163]: time="2025-09-05T23:53:56.383827056Z" level=info msg="StartContainer for \"b2d87e0cb9ecce4947152fe24eef78c72788a04507e4c78e45c7236759b04468\" returns successfully"
Sep 5 23:53:57.134565 kubelet[3416]: I0905 23:53:57.132732 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-n2vld" podStartSLOduration=1.917291136 podStartE2EDuration="5.132708144s" podCreationTimestamp="2025-09-05 23:53:52 +0000 UTC" firstStartedPulling="2025-09-05 23:53:53.021107011 +0000 UTC m=+6.425956365" lastFinishedPulling="2025-09-05 23:53:56.236524031 +0000 UTC m=+9.641373373" observedRunningTime="2025-09-05 23:53:57.111614591 +0000 UTC m=+10.516463921" watchObservedRunningTime="2025-09-05 23:53:57.132708144 +0000 UTC m=+10.537557486"
Sep 5 23:54:03.443589 sudo[2497]: pam_unix(sudo:session): session closed for user root
Sep 5 23:54:03.470227 sshd[2492]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:03.481787 systemd[1]: sshd@6-172.31.22.93:22-139.178.68.195:35446.service: Deactivated successfully.
Sep 5 23:54:03.500061 systemd[1]: session-7.scope: Deactivated successfully.
Sep 5 23:54:03.505579 systemd-logind[2117]: Session 7 logged out. Waiting for processes to exit.
Sep 5 23:54:03.509704 systemd-logind[2117]: Removed session 7.
Sep 5 23:54:17.575646 kubelet[3416]: I0905 23:54:17.575370 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1860a1c7-fc54-4413-abf4-88f9e49fff59-typha-certs\") pod \"calico-typha-5d74799dd9-q68nt\" (UID: \"1860a1c7-fc54-4413-abf4-88f9e49fff59\") " pod="calico-system/calico-typha-5d74799dd9-q68nt"
Sep 5 23:54:17.575646 kubelet[3416]: I0905 23:54:17.575443 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvg6f\" (UniqueName: \"kubernetes.io/projected/1860a1c7-fc54-4413-abf4-88f9e49fff59-kube-api-access-tvg6f\") pod \"calico-typha-5d74799dd9-q68nt\" (UID: \"1860a1c7-fc54-4413-abf4-88f9e49fff59\") " pod="calico-system/calico-typha-5d74799dd9-q68nt"
Sep 5 23:54:17.575646 kubelet[3416]: I0905 23:54:17.575521 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1860a1c7-fc54-4413-abf4-88f9e49fff59-tigera-ca-bundle\") pod \"calico-typha-5d74799dd9-q68nt\" (UID: \"1860a1c7-fc54-4413-abf4-88f9e49fff59\") " pod="calico-system/calico-typha-5d74799dd9-q68nt"
Sep 5 23:54:17.813562 containerd[2163]: time="2025-09-05T23:54:17.812878654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d74799dd9-q68nt,Uid:1860a1c7-fc54-4413-abf4-88f9e49fff59,Namespace:calico-system,Attempt:0,}"
Sep 5 23:54:17.965031 containerd[2163]: time="2025-09-05T23:54:17.964193339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:54:17.965031 containerd[2163]: time="2025-09-05T23:54:17.964319399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:54:17.965031 containerd[2163]: time="2025-09-05T23:54:17.964360871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:54:17.965031 containerd[2163]: time="2025-09-05T23:54:17.964602239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:54:18.081505 kubelet[3416]: I0905 23:54:18.079747 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-flexvol-driver-host\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
Sep 5 23:54:18.081505 kubelet[3416]: I0905 23:54:18.079816 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-var-run-calico\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
Sep 5 23:54:18.081505 kubelet[3416]: I0905 23:54:18.079858 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-lib-modules\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
Sep 5 23:54:18.081505 kubelet[3416]: I0905 23:54:18.079918 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-node-certs\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
Sep 5 23:54:18.081505 kubelet[3416]: I0905 23:54:18.079972 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-var-lib-calico\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
Sep 5 23:54:18.081954 kubelet[3416]: I0905 23:54:18.080025 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-policysync\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
Sep 5 23:54:18.081954 kubelet[3416]: I0905 23:54:18.080106 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-xtables-lock\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
Sep 5 23:54:18.081954 kubelet[3416]: I0905 23:54:18.080154 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-cni-bin-dir\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
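These reconciler entries (and the four that follow) enumerate calico-node's volumes, and the UniqueName prefix names the volume plugin backing each one: kubernetes.io/host-path, kubernetes.io/secret, kubernetes.io/configmap, kubernetes.io/projected. A sketch of how a few of them would be declared with the client-go core/v1 types; the hostPath locations are Calico's conventional defaults, assumed rather than read from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Paths below are illustrative assumptions; the log lines only name the
// volumes and identify their plugin types.
var calicoNodeVolumes = []corev1.Volume{
	{Name: "lib-modules", VolumeSource: corev1.VolumeSource{ // kubernetes.io/host-path
		HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
	}},
	{Name: "xtables-lock", VolumeSource: corev1.VolumeSource{ // kubernetes.io/host-path
		HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock"},
	}},
	{Name: "node-certs", VolumeSource: corev1.VolumeSource{ // kubernetes.io/secret
		Secret: &corev1.SecretVolumeSource{SecretName: "node-certs"},
	}},
	{Name: "tigera-ca-bundle", VolumeSource: corev1.VolumeSource{ // kubernetes.io/configmap
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"},
		},
	}},
	// kube-api-access-2wthb is a kubernetes.io/projected volume (service-account
	// token, CA bundle, namespace) injected automatically for API access.
}

func main() {
	for _, v := range calicoNodeVolumes {
		fmt.Println(v.Name)
	}
}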
Sep 5 23:54:18.081954 kubelet[3416]: I0905 23:54:18.080207 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-cni-log-dir\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
Sep 5 23:54:18.086537 kubelet[3416]: I0905 23:54:18.085509 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wthb\" (UniqueName: \"kubernetes.io/projected/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-kube-api-access-2wthb\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
Sep 5 23:54:18.086537 kubelet[3416]: I0905 23:54:18.085698 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-cni-net-dir\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
Sep 5 23:54:18.086537 kubelet[3416]: I0905 23:54:18.085764 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2f8ab59-c7e8-495b-8698-36a395c5cfeb-tigera-ca-bundle\") pod \"calico-node-qt7g8\" (UID: \"e2f8ab59-c7e8-495b-8698-36a395c5cfeb\") " pod="calico-system/calico-node-qt7g8"
Sep 5 23:54:18.225928 kubelet[3416]: E0905 23:54:18.221079 3416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 23:54:18.225928 kubelet[3416]: W0905 23:54:18.221124 3416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 23:54:18.225928 kubelet[3416]: E0905 23:54:18.221177 3416 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 23:54:18.306241 containerd[2163]: time="2025-09-05T23:54:18.306121809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d74799dd9-q68nt,Uid:1860a1c7-fc54-4413-abf4-88f9e49fff59,Namespace:calico-system,Attempt:0,} returns sandbox id \"43640b61488562bfae03d68c190e763a238000bb063436dae5ea59c827de1dac\""
Sep 5 23:54:18.312507 containerd[2163]: time="2025-09-05T23:54:18.310864065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 5 23:54:18.359031 kubelet[3416]: E0905 23:54:18.358938 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfwvm" podUID="27939f7c-5277-453f-aea0-098e23380a31"
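The "NetworkReady=false ... cni plugin not initialized" error above is expected at this point in boot: containerd only reports the runtime network as ready once a CNI network config exists on disk, and calico-node, which installs that config, has not started yet. A rough sketch of the readiness condition under the usual assumptions (containerd's default conf_dir /etc/cni/net.d; Calico eventually writes a file like 10-calico.conflist):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Simplified stand-in for containerd's check: the CRI plugin keeps
	// NetworkReady=false until it can load a network config from conf_dir.
	matches, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
	if len(matches) == 0 {
		fmt.Println("NetworkReady=false: cni plugin not initialized")
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true, using", matches[0]) // e.g. 10-calico.conflist
}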
Sep 5 23:54:18.495594 kubelet[3416]: I0905 23:54:18.494205 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8j46\" (UniqueName: \"kubernetes.io/projected/27939f7c-5277-453f-aea0-098e23380a31-kube-api-access-r8j46\") pod \"csi-node-driver-lfwvm\" (UID: \"27939f7c-5277-453f-aea0-098e23380a31\") " pod="calico-system/csi-node-driver-lfwvm"
Sep 5 23:54:18.500295 kubelet[3416]: I0905 23:54:18.499726 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27939f7c-5277-453f-aea0-098e23380a31-kubelet-dir\") pod \"csi-node-driver-lfwvm\" (UID: \"27939f7c-5277-453f-aea0-098e23380a31\") " pod="calico-system/csi-node-driver-lfwvm"
Sep 5 23:54:18.513792 kubelet[3416]: I0905 23:54:18.513699 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/27939f7c-5277-453f-aea0-098e23380a31-registration-dir\") pod \"csi-node-driver-lfwvm\" (UID: \"27939f7c-5277-453f-aea0-098e23380a31\") " pod="calico-system/csi-node-driver-lfwvm"
Sep 5 23:54:18.519427 kubelet[3416]: I0905 23:54:18.519225 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/27939f7c-5277-453f-aea0-098e23380a31-varrun\") pod \"csi-node-driver-lfwvm\" (UID: \"27939f7c-5277-453f-aea0-098e23380a31\") " pod="calico-system/csi-node-driver-lfwvm"
Sep 5 23:54:18.531519 kubelet[3416]: I0905 23:54:18.529601 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/27939f7c-5277-453f-aea0-098e23380a31-socket-dir\") pod \"csi-node-driver-lfwvm\" (UID: \"27939f7c-5277-453f-aea0-098e23380a31\") " pod="calico-system/csi-node-driver-lfwvm"
Sep 5 23:54:18.555536 containerd[2163]: time="2025-09-05T23:54:18.554813566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qt7g8,Uid:e2f8ab59-c7e8-495b-8698-36a395c5cfeb,Namespace:calico-system,Attempt:0,}"
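The FlexVolume triplet logged repeatedly during this window (first at 23:54:18.221 above) is a single failure reported three ways: the kubelet probes its FlexVolume plugin directory, execs each driver binary with the argument init, and parses stdout as JSON; /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, so the output is empty and unmarshalling fails with "unexpected end of JSON input". The calico-node pod whose sandbox was just created is what installs that driver, which is what its flexvol-driver-host hostPath volume is for. A minimal sketch of the driver side of the contract; the JSON status shape is the FlexVolume convention, everything else is illustrative:

package main

import (
	"encoding/json"
	"os"
)

// A FlexVolume driver is any executable under the kubelet's plugin directory;
// the kubelet invokes it as `<driver> init` and expects a JSON status object
// on stdout. An empty stdout is exactly the failure the log above records.
func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		_ = json.NewEncoder(os.Stdout).Encode(map[string]any{
			"status": "Success",
			// attach=false: this driver has no attach/detach phase.
			"capabilities": map[string]bool{"attach": false},
		})
		return
	}
	// Unhandled calls report "Not supported" per the FlexVolume convention.
	_ = json.NewEncoder(os.Stdout).Encode(map[string]string{"status": "Not supported"})
}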
Sep 5 23:54:18.728010 systemd[1]: run-containerd-runc-k8s.io-43640b61488562bfae03d68c190e763a238000bb063436dae5ea59c827de1dac-runc.e0WMYL.mount: Deactivated successfully.
Sep 5 23:54:18.763804 containerd[2163]: time="2025-09-05T23:54:18.762860051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:54:18.764363 containerd[2163]: time="2025-09-05T23:54:18.763523267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:54:18.766557 containerd[2163]: time="2025-09-05T23:54:18.766015835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:54:18.767478 containerd[2163]: time="2025-09-05T23:54:18.767241455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:54:18.983230 containerd[2163]: time="2025-09-05T23:54:18.983164944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qt7g8,Uid:e2f8ab59-c7e8-495b-8698-36a395c5cfeb,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed2842b0f5f3d786555c6fa3a8c725873cb7bb42329549cbc28a2fc67f79e549\""
Sep 5 23:54:19.883329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975048835.mount: Deactivated successfully.
Sep 5 23:54:20.009625 kubelet[3416]: E0905 23:54:20.007777 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfwvm" podUID="27939f7c-5277-453f-aea0-098e23380a31"
Sep 5 23:54:20.724414 containerd[2163]: time="2025-09-05T23:54:20.724354897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:20.726499 containerd[2163]: time="2025-09-05T23:54:20.726062521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 5 23:54:20.726773 containerd[2163]: time="2025-09-05T23:54:20.726615049Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:20.731123 containerd[2163]: time="2025-09-05T23:54:20.731016277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:20.733235 containerd[2163]: time="2025-09-05T23:54:20.732914509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.421975516s"
Sep 5 23:54:20.733235 containerd[2163]: time="2025-09-05T23:54:20.732984217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 5 23:54:20.739787 containerd[2163]: time="2025-09-05T23:54:20.739399429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 5 23:54:20.777251 containerd[2163]: time="2025-09-05T23:54:20.777177973Z" level=info msg="CreateContainer within sandbox \"43640b61488562bfae03d68c190e763a238000bb063436dae5ea59c827de1dac\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 5 23:54:20.800429 containerd[2163]: time="2025-09-05T23:54:20.800328697Z" level=info msg="CreateContainer within sandbox \"43640b61488562bfae03d68c190e763a238000bb063436dae5ea59c827de1dac\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cce85b3d78fd0249c2026e64fa52c4aa68874993e3d40c5766f2ff2415dcbe8c\""
Sep 5 23:54:20.802928 containerd[2163]: time="2025-09-05T23:54:20.801499813Z" level=info msg="StartContainer for \"cce85b3d78fd0249c2026e64fa52c4aa68874993e3d40c5766f2ff2415dcbe8c\""
Sep 5 23:54:20.931444 containerd[2163]: time="2025-09-05T23:54:20.931384250Z" level=info msg="StartContainer for \"cce85b3d78fd0249c2026e64fa52c4aa68874993e3d40c5766f2ff2415dcbe8c\" returns successfully"
Sep 5 23:54:21.290530 kubelet[3416]: E0905 23:54:21.290054 3416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 23:54:21.294154 kubelet[3416]: W0905 23:54:21.293918 3416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 23:54:21.294154 kubelet[3416]: E0905 23:54:21.294023 3416 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 23:54:21.352317 kubelet[3416]: E0905 23:54:21.352261 3416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 23:54:21.352317 kubelet[3416]: W0905 23:54:21.352302 3416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 23:54:21.353433 kubelet[3416]: E0905 23:54:21.352337 3416 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 23:54:21.384589 kubelet[3416]: I0905 23:54:21.384437 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d74799dd9-q68nt" podStartSLOduration=1.9574255680000001 podStartE2EDuration="4.384409848s" podCreationTimestamp="2025-09-05 23:54:17 +0000 UTC" firstStartedPulling="2025-09-05 23:54:18.310173525 +0000 UTC m=+31.715022867" lastFinishedPulling="2025-09-05 23:54:20.737157817 +0000 UTC m=+34.142007147" observedRunningTime="2025-09-05 23:54:21.383971848 +0000 UTC m=+34.788821202" watchObservedRunningTime="2025-09-05 23:54:21.384409848 +0000 UTC m=+34.789259202"
Sep 5 23:54:22.007890 kubelet[3416]: E0905 23:54:22.007317 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfwvm" podUID="27939f7c-5277-453f-aea0-098e23380a31"
Sep 5 23:54:22.187786 containerd[2163]: time="2025-09-05T23:54:22.187726728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:22.190080 containerd[2163]: time="2025-09-05T23:54:22.189667020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814"
Sep 5 23:54:22.190080 containerd[2163]: time="2025-09-05T23:54:22.189997896Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:22.197000 containerd[2163]: time="2025-09-05T23:54:22.196197720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:22.198041 containerd[2163]: time="2025-09-05T23:54:22.197956116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.458397243s"
Sep 5 23:54:22.198041 containerd[2163]: time="2025-09-05T23:54:22.198032148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 5 23:54:22.205571 containerd[2163]: time="2025-09-05T23:54:22.205397856Z" level=info msg="CreateContainer within sandbox \"ed2842b0f5f3d786555c6fa3a8c725873cb7bb42329549cbc28a2fc67f79e549\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 5 23:54:22.230535 containerd[2163]: time="2025-09-05T23:54:22.229748124Z" level=info msg="CreateContainer within sandbox \"ed2842b0f5f3d786555c6fa3a8c725873cb7bb42329549cbc28a2fc67f79e549\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ecdd6ecf0e47d9da098a27666e98a19e2a45f81b452033c0f5438f242a9fdaa8\""
Sep 5 23:54:22.234576 containerd[2163]: time="2025-09-05T23:54:22.232513500Z" level=info msg="StartContainer for \"ecdd6ecf0e47d9da098a27666e98a19e2a45f81b452033c0f5438f242a9fdaa8\""
Sep 5 23:54:22.328071 kubelet[3416]: E0905 23:54:22.326353 3416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 23:54:22.328071 kubelet[3416]: W0905 23:54:22.327049 3416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 23:54:22.328071 kubelet[3416]: E0905 23:54:22.327105 3416 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 23:54:22.390687 kubelet[3416]: E0905 23:54:22.390498 3416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 23:54:22.390687 kubelet[3416]: W0905 23:54:22.390529 3416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 23:54:22.390687 kubelet[3416]: E0905 23:54:22.390560 3416 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 23:54:22.450873 containerd[2163]: time="2025-09-05T23:54:22.450632929Z" level=info msg="StartContainer for \"ecdd6ecf0e47d9da098a27666e98a19e2a45f81b452033c0f5438f242a9fdaa8\" returns successfully"
Sep 5 23:54:22.533862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecdd6ecf0e47d9da098a27666e98a19e2a45f81b452033c0f5438f242a9fdaa8-rootfs.mount: Deactivated successfully.
Sep 5 23:54:22.900822 containerd[2163]: time="2025-09-05T23:54:22.900679504Z" level=info msg="shim disconnected" id=ecdd6ecf0e47d9da098a27666e98a19e2a45f81b452033c0f5438f242a9fdaa8 namespace=k8s.io Sep 5 23:54:22.901378 containerd[2163]: time="2025-09-05T23:54:22.900785356Z" level=warning msg="cleaning up after shim disconnected" id=ecdd6ecf0e47d9da098a27666e98a19e2a45f81b452033c0f5438f242a9fdaa8 namespace=k8s.io Sep 5 23:54:22.901378 containerd[2163]: time="2025-09-05T23:54:22.901127776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:23.288722 containerd[2163]: time="2025-09-05T23:54:23.288669398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 23:54:24.007899 kubelet[3416]: E0905 23:54:24.007259 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfwvm" podUID="27939f7c-5277-453f-aea0-098e23380a31" Sep 5 23:54:26.008834 kubelet[3416]: E0905 23:54:26.008705 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lfwvm" podUID="27939f7c-5277-453f-aea0-098e23380a31" Sep 5 23:54:26.421637 containerd[2163]: time="2025-09-05T23:54:26.421025021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:26.422966 containerd[2163]: time="2025-09-05T23:54:26.422905829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 23:54:26.424509 containerd[2163]: time="2025-09-05T23:54:26.423774377Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:26.429935 containerd[2163]: time="2025-09-05T23:54:26.429852137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:26.432094 containerd[2163]: time="2025-09-05T23:54:26.432023009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.142696791s" Sep 5 23:54:26.432329 containerd[2163]: time="2025-09-05T23:54:26.432286601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 23:54:26.438794 containerd[2163]: time="2025-09-05T23:54:26.438671765Z" level=info msg="CreateContainer within sandbox \"ed2842b0f5f3d786555c6fa3a8c725873cb7bb42329549cbc28a2fc67f79e549\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 23:54:26.458575 containerd[2163]: time="2025-09-05T23:54:26.458502665Z" level=info msg="CreateContainer within sandbox \"ed2842b0f5f3d786555c6fa3a8c725873cb7bb42329549cbc28a2fc67f79e549\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dfc2a7c2297375aa00dbadcdad8c58586ecb39a001efee98539eba555343c2d5\"" Sep 5 23:54:26.464516 containerd[2163]: time="2025-09-05T23:54:26.462674021Z" level=info msg="StartContainer for \"dfc2a7c2297375aa00dbadcdad8c58586ecb39a001efee98539eba555343c2d5\"" Sep 5 23:54:26.465388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968669964.mount: Deactivated successfully. Sep 5 23:54:26.592095 containerd[2163]: time="2025-09-05T23:54:26.592015746Z" level=info msg="StartContainer for \"dfc2a7c2297375aa00dbadcdad8c58586ecb39a001efee98539eba555343c2d5\" returns successfully" Sep 5 23:54:27.672951 containerd[2163]: time="2025-09-05T23:54:27.672798979Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:54:27.704985 kubelet[3416]: I0905 23:54:27.704842 3416 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 23:54:27.722640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfc2a7c2297375aa00dbadcdad8c58586ecb39a001efee98539eba555343c2d5-rootfs.mount: Deactivated successfully. Sep 5 23:54:27.820609 kubelet[3416]: I0905 23:54:27.820541 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpq4h\" (UniqueName: \"kubernetes.io/projected/e5c5635c-d655-4227-b462-e9b1f8d42ffd-kube-api-access-rpq4h\") pod \"calico-apiserver-565db755f8-lctqj\" (UID: \"e5c5635c-d655-4227-b462-e9b1f8d42ffd\") " pod="calico-apiserver/calico-apiserver-565db755f8-lctqj" Sep 5 23:54:27.821771 kubelet[3416]: I0905 23:54:27.820618 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8aa21a8f-5d63-4c31-ba62-2b91293e20d2-goldmane-key-pair\") pod \"goldmane-7988f88666-xnppm\" (UID: \"8aa21a8f-5d63-4c31-ba62-2b91293e20d2\") " pod="calico-system/goldmane-7988f88666-xnppm" Sep 5 23:54:27.821771 kubelet[3416]: I0905 23:54:27.820668 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b02e4030-fdc9-4a12-bd86-85df0b683a74-config-volume\") pod \"coredns-7c65d6cfc9-m5jqb\" (UID: \"b02e4030-fdc9-4a12-bd86-85df0b683a74\") " pod="kube-system/coredns-7c65d6cfc9-m5jqb" Sep 5 23:54:27.821771 kubelet[3416]: I0905 23:54:27.820707 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33374cf0-d22a-463e-a18e-3ad951e90629-whisker-backend-key-pair\") pod \"whisker-7496b667cf-hkdvd\" (UID: \"33374cf0-d22a-463e-a18e-3ad951e90629\") " pod="calico-system/whisker-7496b667cf-hkdvd" Sep 5 23:54:27.821771 kubelet[3416]: I0905 23:54:27.820744 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwdq2\" (UniqueName: \"kubernetes.io/projected/33374cf0-d22a-463e-a18e-3ad951e90629-kube-api-access-jwdq2\") pod \"whisker-7496b667cf-hkdvd\" (UID: \"33374cf0-d22a-463e-a18e-3ad951e90629\") " pod="calico-system/whisker-7496b667cf-hkdvd" Sep 5 23:54:27.821771 kubelet[3416]: I0905 23:54:27.820780 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8aa21a8f-5d63-4c31-ba62-2b91293e20d2-goldmane-ca-bundle\") pod \"goldmane-7988f88666-xnppm\" (UID: \"8aa21a8f-5d63-4c31-ba62-2b91293e20d2\") " pod="calico-system/goldmane-7988f88666-xnppm" Sep 5 23:54:27.822100 kubelet[3416]: I0905 23:54:27.820840 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrcvj\" (UniqueName: \"kubernetes.io/projected/b9777b13-1b79-4ea7-958f-63691e6fecb7-kube-api-access-jrcvj\") pod \"coredns-7c65d6cfc9-bn4jv\" (UID: \"b9777b13-1b79-4ea7-958f-63691e6fecb7\") " pod="kube-system/coredns-7c65d6cfc9-bn4jv" Sep 5 23:54:27.822100 kubelet[3416]: I0905 23:54:27.820878 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e5c5635c-d655-4227-b462-e9b1f8d42ffd-calico-apiserver-certs\") pod \"calico-apiserver-565db755f8-lctqj\" (UID: \"e5c5635c-d655-4227-b462-e9b1f8d42ffd\") " pod="calico-apiserver/calico-apiserver-565db755f8-lctqj" Sep 5 23:54:27.822100 kubelet[3416]: I0905 23:54:27.820931 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aa21a8f-5d63-4c31-ba62-2b91293e20d2-config\") pod \"goldmane-7988f88666-xnppm\" (UID: \"8aa21a8f-5d63-4c31-ba62-2b91293e20d2\") " pod="calico-system/goldmane-7988f88666-xnppm" Sep 5 23:54:27.822100 kubelet[3416]: I0905 23:54:27.820974 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2rzw\" (UniqueName: \"kubernetes.io/projected/b02e4030-fdc9-4a12-bd86-85df0b683a74-kube-api-access-r2rzw\") pod \"coredns-7c65d6cfc9-m5jqb\" (UID: \"b02e4030-fdc9-4a12-bd86-85df0b683a74\") " pod="kube-system/coredns-7c65d6cfc9-m5jqb" Sep 5 23:54:27.822100 kubelet[3416]: I0905 23:54:27.821011 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rtn6\" (UniqueName: \"kubernetes.io/projected/8aa21a8f-5d63-4c31-ba62-2b91293e20d2-kube-api-access-7rtn6\") pod \"goldmane-7988f88666-xnppm\" (UID: \"8aa21a8f-5d63-4c31-ba62-2b91293e20d2\") " pod="calico-system/goldmane-7988f88666-xnppm" Sep 5 23:54:27.822383 kubelet[3416]: I0905 23:54:27.821049 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9777b13-1b79-4ea7-958f-63691e6fecb7-config-volume\") pod \"coredns-7c65d6cfc9-bn4jv\" (UID: \"b9777b13-1b79-4ea7-958f-63691e6fecb7\") " pod="kube-system/coredns-7c65d6cfc9-bn4jv" Sep 5 23:54:27.822383 kubelet[3416]: I0905 23:54:27.821084 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33374cf0-d22a-463e-a18e-3ad951e90629-whisker-ca-bundle\") pod \"whisker-7496b667cf-hkdvd\" (UID: \"33374cf0-d22a-463e-a18e-3ad951e90629\") " pod="calico-system/whisker-7496b667cf-hkdvd" Sep 5 23:54:27.923657 kubelet[3416]: I0905 23:54:27.922119 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c117decb-6235-448f-af92-cc2c7e502ccf-calico-apiserver-certs\") pod \"calico-apiserver-565db755f8-vcd78\" (UID: \"c117decb-6235-448f-af92-cc2c7e502ccf\") " pod="calico-apiserver/calico-apiserver-565db755f8-vcd78" Sep 5 
23:54:27.923657 kubelet[3416]: I0905 23:54:27.922208 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdgcj\" (UniqueName: \"kubernetes.io/projected/382a846b-2e68-4366-92ad-add3d9374f37-kube-api-access-wdgcj\") pod \"calico-kube-controllers-69b567d4fc-2p74x\" (UID: \"382a846b-2e68-4366-92ad-add3d9374f37\") " pod="calico-system/calico-kube-controllers-69b567d4fc-2p74x" Sep 5 23:54:27.923657 kubelet[3416]: I0905 23:54:27.922271 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dwnf\" (UniqueName: \"kubernetes.io/projected/c117decb-6235-448f-af92-cc2c7e502ccf-kube-api-access-8dwnf\") pod \"calico-apiserver-565db755f8-vcd78\" (UID: \"c117decb-6235-448f-af92-cc2c7e502ccf\") " pod="calico-apiserver/calico-apiserver-565db755f8-vcd78" Sep 5 23:54:27.923657 kubelet[3416]: I0905 23:54:27.923142 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/382a846b-2e68-4366-92ad-add3d9374f37-tigera-ca-bundle\") pod \"calico-kube-controllers-69b567d4fc-2p74x\" (UID: \"382a846b-2e68-4366-92ad-add3d9374f37\") " pod="calico-system/calico-kube-controllers-69b567d4fc-2p74x" Sep 5 23:54:28.059657 containerd[2163]: time="2025-09-05T23:54:28.059565413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfwvm,Uid:27939f7c-5277-453f-aea0-098e23380a31,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:28.106921 containerd[2163]: time="2025-09-05T23:54:28.106857425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bn4jv,Uid:b9777b13-1b79-4ea7-958f-63691e6fecb7,Namespace:kube-system,Attempt:0,}" Sep 5 23:54:28.156715 containerd[2163]: time="2025-09-05T23:54:28.156603570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5jqb,Uid:b02e4030-fdc9-4a12-bd86-85df0b683a74,Namespace:kube-system,Attempt:0,}" Sep 5 23:54:28.196914 containerd[2163]: time="2025-09-05T23:54:28.196785870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565db755f8-lctqj,Uid:e5c5635c-d655-4227-b462-e9b1f8d42ffd,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:54:28.198047 containerd[2163]: time="2025-09-05T23:54:28.197973762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7496b667cf-hkdvd,Uid:33374cf0-d22a-463e-a18e-3ad951e90629,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:28.240264 containerd[2163]: time="2025-09-05T23:54:28.239681646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565db755f8-vcd78,Uid:c117decb-6235-448f-af92-cc2c7e502ccf,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:54:28.240502 containerd[2163]: time="2025-09-05T23:54:28.240390486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xnppm,Uid:8aa21a8f-5d63-4c31-ba62-2b91293e20d2,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:28.241335 containerd[2163]: time="2025-09-05T23:54:28.240857430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69b567d4fc-2p74x,Uid:382a846b-2e68-4366-92ad-add3d9374f37,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:28.252818 containerd[2163]: time="2025-09-05T23:54:28.252563814Z" level=info msg="shim disconnected" id=dfc2a7c2297375aa00dbadcdad8c58586ecb39a001efee98539eba555343c2d5 namespace=k8s.io Sep 5 23:54:28.252818 containerd[2163]: time="2025-09-05T23:54:28.252670674Z" level=warning 
msg="cleaning up after shim disconnected" id=dfc2a7c2297375aa00dbadcdad8c58586ecb39a001efee98539eba555343c2d5 namespace=k8s.io Sep 5 23:54:28.252818 containerd[2163]: time="2025-09-05T23:54:28.252692922Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:28.401628 containerd[2163]: time="2025-09-05T23:54:28.401150299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 23:54:28.829051 containerd[2163]: time="2025-09-05T23:54:28.828974817Z" level=error msg="Failed to destroy network for sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.834071 containerd[2163]: time="2025-09-05T23:54:28.833414085Z" level=error msg="encountered an error cleaning up failed sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.834071 containerd[2163]: time="2025-09-05T23:54:28.833574417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69b567d4fc-2p74x,Uid:382a846b-2e68-4366-92ad-add3d9374f37,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.835738 kubelet[3416]: E0905 23:54:28.835663 3416 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.841850 kubelet[3416]: E0905 23:54:28.835772 3416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69b567d4fc-2p74x" Sep 5 23:54:28.841850 kubelet[3416]: E0905 23:54:28.835814 3416 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69b567d4fc-2p74x" Sep 5 23:54:28.841850 kubelet[3416]: E0905 23:54:28.835891 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69b567d4fc-2p74x_calico-system(382a846b-2e68-4366-92ad-add3d9374f37)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-kube-controllers-69b567d4fc-2p74x_calico-system(382a846b-2e68-4366-92ad-add3d9374f37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69b567d4fc-2p74x" podUID="382a846b-2e68-4366-92ad-add3d9374f37" Sep 5 23:54:28.841260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9-shm.mount: Deactivated successfully. Sep 5 23:54:28.892856 containerd[2163]: time="2025-09-05T23:54:28.892733181Z" level=error msg="Failed to destroy network for sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.897626 containerd[2163]: time="2025-09-05T23:54:28.895001277Z" level=error msg="encountered an error cleaning up failed sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.897626 containerd[2163]: time="2025-09-05T23:54:28.895106049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfwvm,Uid:27939f7c-5277-453f-aea0-098e23380a31,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.897957 kubelet[3416]: E0905 23:54:28.897721 3416 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.897957 kubelet[3416]: E0905 23:54:28.897799 3416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lfwvm" Sep 5 23:54:28.897957 kubelet[3416]: E0905 23:54:28.897831 3416 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lfwvm" Sep 5 23:54:28.898720 
kubelet[3416]: E0905 23:54:28.898647 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lfwvm_calico-system(27939f7c-5277-453f-aea0-098e23380a31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lfwvm_calico-system(27939f7c-5277-453f-aea0-098e23380a31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lfwvm" podUID="27939f7c-5277-453f-aea0-098e23380a31" Sep 5 23:54:28.902548 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640-shm.mount: Deactivated successfully. Sep 5 23:54:28.911776 containerd[2163]: time="2025-09-05T23:54:28.911696505Z" level=error msg="Failed to destroy network for sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.912314 containerd[2163]: time="2025-09-05T23:54:28.912018705Z" level=error msg="Failed to destroy network for sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.920049 containerd[2163]: time="2025-09-05T23:54:28.919134897Z" level=error msg="encountered an error cleaning up failed sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.920049 containerd[2163]: time="2025-09-05T23:54:28.919238865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bn4jv,Uid:b9777b13-1b79-4ea7-958f-63691e6fecb7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.921107 kubelet[3416]: E0905 23:54:28.920136 3416 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.921107 kubelet[3416]: E0905 23:54:28.920216 3416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bn4jv" Sep 5 23:54:28.921107 kubelet[3416]: E0905 23:54:28.920251 3416 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bn4jv" Sep 5 23:54:28.925316 kubelet[3416]: E0905 23:54:28.920309 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bn4jv_kube-system(b9777b13-1b79-4ea7-958f-63691e6fecb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bn4jv_kube-system(b9777b13-1b79-4ea7-958f-63691e6fecb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bn4jv" podUID="b9777b13-1b79-4ea7-958f-63691e6fecb7" Sep 5 23:54:28.928787 containerd[2163]: time="2025-09-05T23:54:28.924283701Z" level=error msg="encountered an error cleaning up failed sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.928787 containerd[2163]: time="2025-09-05T23:54:28.924386457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xnppm,Uid:8aa21a8f-5d63-4c31-ba62-2b91293e20d2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.921216 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a-shm.mount: Deactivated successfully. 
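
Every RunPodSandbox failure in this stretch has the same root cause, spelled out in the plugin's own error string: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file the calico/node container writes once it is running, so every sandbox add and delete fails until the CNI bootstrap completes. A small probe that mirrors that readiness condition, as a sketch in Go (the interpretation of the file follows the error text above, not Calico source):

    // Sketch of a readiness probe matching the condition in the errors above:
    // Calico's node agent writes /var/lib/calico/nodename once it is up; until
    // then every CNI add/delete on this host fails as logged.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            // Same state the plugin reports: the file is absent because
            // calico/node has not started (or /var/lib/calico is not mounted).
            fmt.Fprintf(os.Stderr, "calico not ready: %v\n", err)
            os.Exit(1)
        }
        fmt.Printf("calico ready on node %q\n", strings.TrimSpace(string(b)))
    }

The failures here are consistent with the calico/node image still being pulled at this point in the log, so the retries at 23:54:29 below hit the same missing file.
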
Sep 5 23:54:28.930935 kubelet[3416]: E0905 23:54:28.929126 3416 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.930935 kubelet[3416]: E0905 23:54:28.929203 3416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-xnppm" Sep 5 23:54:28.930935 kubelet[3416]: E0905 23:54:28.929237 3416 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-xnppm" Sep 5 23:54:28.922025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090-shm.mount: Deactivated successfully. Sep 5 23:54:28.931235 kubelet[3416]: E0905 23:54:28.929311 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-xnppm_calico-system(8aa21a8f-5d63-4c31-ba62-2b91293e20d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-xnppm_calico-system(8aa21a8f-5d63-4c31-ba62-2b91293e20d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-xnppm" podUID="8aa21a8f-5d63-4c31-ba62-2b91293e20d2" Sep 5 23:54:28.940521 containerd[2163]: time="2025-09-05T23:54:28.938414398Z" level=error msg="Failed to destroy network for sandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.940521 containerd[2163]: time="2025-09-05T23:54:28.939040654Z" level=error msg="encountered an error cleaning up failed sandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.940521 containerd[2163]: time="2025-09-05T23:54:28.939123754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5jqb,Uid:b02e4030-fdc9-4a12-bd86-85df0b683a74,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.940796 kubelet[3416]: E0905 23:54:28.939429 3416 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.941131 kubelet[3416]: E0905 23:54:28.940762 3416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m5jqb" Sep 5 23:54:28.943840 kubelet[3416]: E0905 23:54:28.941137 3416 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-m5jqb" Sep 5 23:54:28.943840 kubelet[3416]: E0905 23:54:28.941229 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-m5jqb_kube-system(b02e4030-fdc9-4a12-bd86-85df0b683a74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-m5jqb_kube-system(b02e4030-fdc9-4a12-bd86-85df0b683a74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-m5jqb" podUID="b02e4030-fdc9-4a12-bd86-85df0b683a74" Sep 5 23:54:28.951005 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4-shm.mount: Deactivated successfully. 
Sep 5 23:54:28.961561 containerd[2163]: time="2025-09-05T23:54:28.961092886Z" level=error msg="Failed to destroy network for sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.962971 containerd[2163]: time="2025-09-05T23:54:28.962869858Z" level=error msg="encountered an error cleaning up failed sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.963143 containerd[2163]: time="2025-09-05T23:54:28.963023590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565db755f8-vcd78,Uid:c117decb-6235-448f-af92-cc2c7e502ccf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.963755 kubelet[3416]: E0905 23:54:28.963376 3416 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.963755 kubelet[3416]: E0905 23:54:28.963494 3416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565db755f8-vcd78" Sep 5 23:54:28.963755 kubelet[3416]: E0905 23:54:28.963533 3416 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565db755f8-vcd78" Sep 5 23:54:28.964063 kubelet[3416]: E0905 23:54:28.963622 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-565db755f8-vcd78_calico-apiserver(c117decb-6235-448f-af92-cc2c7e502ccf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-565db755f8-vcd78_calico-apiserver(c117decb-6235-448f-af92-cc2c7e502ccf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-565db755f8-vcd78" podUID="c117decb-6235-448f-af92-cc2c7e502ccf" Sep 5 23:54:28.979734 containerd[2163]: time="2025-09-05T23:54:28.979541230Z" level=error msg="Failed to destroy network for sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.981453 containerd[2163]: time="2025-09-05T23:54:28.981223474Z" level=error msg="encountered an error cleaning up failed sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.981668 containerd[2163]: time="2025-09-05T23:54:28.981606238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7496b667cf-hkdvd,Uid:33374cf0-d22a-463e-a18e-3ad951e90629,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.982060 containerd[2163]: time="2025-09-05T23:54:28.982017658Z" level=error msg="Failed to destroy network for sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.982313 kubelet[3416]: E0905 23:54:28.982126 3416 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.982313 kubelet[3416]: E0905 23:54:28.982209 3416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7496b667cf-hkdvd" Sep 5 23:54:28.982313 kubelet[3416]: E0905 23:54:28.982256 3416 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7496b667cf-hkdvd" Sep 5 23:54:28.984944 kubelet[3416]: E0905 23:54:28.982316 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7496b667cf-hkdvd_calico-system(33374cf0-d22a-463e-a18e-3ad951e90629)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7496b667cf-hkdvd_calico-system(33374cf0-d22a-463e-a18e-3ad951e90629)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7496b667cf-hkdvd" podUID="33374cf0-d22a-463e-a18e-3ad951e90629" Sep 5 23:54:28.985764 containerd[2163]: time="2025-09-05T23:54:28.985255306Z" level=error msg="encountered an error cleaning up failed sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.985764 containerd[2163]: time="2025-09-05T23:54:28.985341574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565db755f8-lctqj,Uid:e5c5635c-d655-4227-b462-e9b1f8d42ffd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.986802 kubelet[3416]: E0905 23:54:28.986097 3416 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:28.986802 kubelet[3416]: E0905 23:54:28.986351 3416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565db755f8-lctqj" Sep 5 23:54:28.986802 kubelet[3416]: E0905 23:54:28.986394 3416 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565db755f8-lctqj" Sep 5 23:54:28.987856 kubelet[3416]: E0905 23:54:28.986880 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-565db755f8-lctqj_calico-apiserver(e5c5635c-d655-4227-b462-e9b1f8d42ffd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-565db755f8-lctqj_calico-apiserver(e5c5635c-d655-4227-b462-e9b1f8d42ffd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-565db755f8-lctqj" podUID="e5c5635c-d655-4227-b462-e9b1f8d42ffd" Sep 5 23:54:29.340192 kubelet[3416]: I0905 23:54:29.340133 3416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:29.344195 containerd[2163]: time="2025-09-05T23:54:29.341818352Z" level=info msg="StopPodSandbox for \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\"" Sep 5 23:54:29.344195 containerd[2163]: time="2025-09-05T23:54:29.342129956Z" level=info msg="Ensure that sandbox 6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074 in task-service has been cleanup successfully" Sep 5 23:54:29.344594 kubelet[3416]: I0905 23:54:29.343617 3416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:29.345516 containerd[2163]: time="2025-09-05T23:54:29.344772860Z" level=info msg="StopPodSandbox for \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\"" Sep 5 23:54:29.345516 containerd[2163]: time="2025-09-05T23:54:29.345092048Z" level=info msg="Ensure that sandbox 07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9 in task-service has been cleanup successfully" Sep 5 23:54:29.364755 kubelet[3416]: I0905 23:54:29.364704 3416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:54:29.367338 containerd[2163]: time="2025-09-05T23:54:29.367162160Z" level=info msg="StopPodSandbox for \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\"" Sep 5 23:54:29.368913 containerd[2163]: time="2025-09-05T23:54:29.368843120Z" level=info msg="Ensure that sandbox 707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640 in task-service has been cleanup successfully" Sep 5 23:54:29.374846 kubelet[3416]: I0905 23:54:29.374693 3416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:29.377649 containerd[2163]: time="2025-09-05T23:54:29.377156480Z" level=info msg="StopPodSandbox for \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\"" Sep 5 23:54:29.378917 containerd[2163]: time="2025-09-05T23:54:29.378694364Z" level=info msg="Ensure that sandbox 84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62 in task-service has been cleanup successfully" Sep 5 23:54:29.381114 kubelet[3416]: I0905 23:54:29.381039 3416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:54:29.390209 containerd[2163]: time="2025-09-05T23:54:29.389572316Z" level=info msg="StopPodSandbox for \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\"" Sep 5 23:54:29.391094 containerd[2163]: time="2025-09-05T23:54:29.390913088Z" level=info msg="Ensure that sandbox 4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00 in task-service has been cleanup successfully" Sep 5 23:54:29.398521 kubelet[3416]: I0905 23:54:29.398391 3416 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:54:29.403738 containerd[2163]: time="2025-09-05T23:54:29.403579748Z" level=info msg="StopPodSandbox for \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\"" Sep 5 23:54:29.407304 containerd[2163]: time="2025-09-05T23:54:29.406880960Z" level=info msg="Ensure that sandbox c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4 in task-service has been cleanup successfully" Sep 5 23:54:29.418407 kubelet[3416]: I0905 23:54:29.418331 3416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:54:29.421281 containerd[2163]: time="2025-09-05T23:54:29.420721352Z" level=info msg="StopPodSandbox for \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\"" Sep 5 23:54:29.421281 containerd[2163]: time="2025-09-05T23:54:29.421052168Z" level=info msg="Ensure that sandbox acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090 in task-service has been cleanup successfully" Sep 5 23:54:29.428840 kubelet[3416]: I0905 23:54:29.428678 3416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:54:29.434815 containerd[2163]: time="2025-09-05T23:54:29.434728064Z" level=info msg="StopPodSandbox for \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\"" Sep 5 23:54:29.439536 containerd[2163]: time="2025-09-05T23:54:29.438440648Z" level=info msg="Ensure that sandbox 6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a in task-service has been cleanup successfully" Sep 5 23:54:29.553506 containerd[2163]: time="2025-09-05T23:54:29.553184409Z" level=error msg="StopPodSandbox for \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\" failed" error="failed to destroy network for sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:29.554141 kubelet[3416]: E0905 23:54:29.553809 3416 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:29.557108 kubelet[3416]: E0905 23:54:29.555150 3416 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074"} Sep 5 23:54:29.557108 kubelet[3416]: E0905 23:54:29.555274 3416 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c117decb-6235-448f-af92-cc2c7e502ccf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Sep 5 23:54:29.557108 kubelet[3416]: E0905 23:54:29.555319 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c117decb-6235-448f-af92-cc2c7e502ccf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-565db755f8-vcd78" podUID="c117decb-6235-448f-af92-cc2c7e502ccf" Sep 5 23:54:29.594279 containerd[2163]: time="2025-09-05T23:54:29.594109065Z" level=error msg="StopPodSandbox for \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\" failed" error="failed to destroy network for sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:29.595115 kubelet[3416]: E0905 23:54:29.594829 3416 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:29.595754 kubelet[3416]: E0905 23:54:29.595538 3416 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62"} Sep 5 23:54:29.596265 kubelet[3416]: E0905 23:54:29.596108 3416 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33374cf0-d22a-463e-a18e-3ad951e90629\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:54:29.596981 kubelet[3416]: E0905 23:54:29.596705 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33374cf0-d22a-463e-a18e-3ad951e90629\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7496b667cf-hkdvd" podUID="33374cf0-d22a-463e-a18e-3ad951e90629" Sep 5 23:54:29.619987 containerd[2163]: time="2025-09-05T23:54:29.619803369Z" level=error msg="StopPodSandbox for \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\" failed" error="failed to destroy network for sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:29.620535 kubelet[3416]: E0905 23:54:29.620139 3416 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:54:29.620535 kubelet[3416]: E0905 23:54:29.620228 3416 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090"} Sep 5 23:54:29.620535 kubelet[3416]: E0905 23:54:29.620287 3416 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9777b13-1b79-4ea7-958f-63691e6fecb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:54:29.620535 kubelet[3416]: E0905 23:54:29.620328 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9777b13-1b79-4ea7-958f-63691e6fecb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bn4jv" podUID="b9777b13-1b79-4ea7-958f-63691e6fecb7" Sep 5 23:54:29.631147 containerd[2163]: time="2025-09-05T23:54:29.630629229Z" level=error msg="StopPodSandbox for \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\" failed" error="failed to destroy network for sandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:29.631147 containerd[2163]: time="2025-09-05T23:54:29.630944265Z" level=error msg="StopPodSandbox for \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\" failed" error="failed to destroy network for sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:29.631596 kubelet[3416]: E0905 23:54:29.631235 3416 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 
23:54:29.631596 kubelet[3416]: E0905 23:54:29.631319 3416 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9"} Sep 5 23:54:29.631596 kubelet[3416]: E0905 23:54:29.631376 3416 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"382a846b-2e68-4366-92ad-add3d9374f37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:54:29.631596 kubelet[3416]: E0905 23:54:29.631430 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"382a846b-2e68-4366-92ad-add3d9374f37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69b567d4fc-2p74x" podUID="382a846b-2e68-4366-92ad-add3d9374f37" Sep 5 23:54:29.633157 kubelet[3416]: E0905 23:54:29.631235 3416 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:54:29.633157 kubelet[3416]: E0905 23:54:29.632536 3416 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4"} Sep 5 23:54:29.633157 kubelet[3416]: E0905 23:54:29.632662 3416 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b02e4030-fdc9-4a12-bd86-85df0b683a74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:54:29.633157 kubelet[3416]: E0905 23:54:29.632705 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b02e4030-fdc9-4a12-bd86-85df0b683a74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-m5jqb" podUID="b02e4030-fdc9-4a12-bd86-85df0b683a74" Sep 5 23:54:29.643011 containerd[2163]: time="2025-09-05T23:54:29.642323493Z" level=error msg="StopPodSandbox for 
\"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\" failed" error="failed to destroy network for sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:29.643172 kubelet[3416]: E0905 23:54:29.643096 3416 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:54:29.643273 kubelet[3416]: E0905 23:54:29.643170 3416 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640"} Sep 5 23:54:29.643273 kubelet[3416]: E0905 23:54:29.643224 3416 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27939f7c-5277-453f-aea0-098e23380a31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:54:29.643731 kubelet[3416]: E0905 23:54:29.643267 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27939f7c-5277-453f-aea0-098e23380a31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lfwvm" podUID="27939f7c-5277-453f-aea0-098e23380a31" Sep 5 23:54:29.645200 containerd[2163]: time="2025-09-05T23:54:29.644883933Z" level=error msg="StopPodSandbox for \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\" failed" error="failed to destroy network for sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:29.646158 kubelet[3416]: E0905 23:54:29.645920 3416 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:54:29.646158 kubelet[3416]: E0905 23:54:29.645993 3416 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a"} Sep 5 23:54:29.646158 kubelet[3416]: E0905 23:54:29.646050 3416 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8aa21a8f-5d63-4c31-ba62-2b91293e20d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:54:29.646158 kubelet[3416]: E0905 23:54:29.646094 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8aa21a8f-5d63-4c31-ba62-2b91293e20d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-xnppm" podUID="8aa21a8f-5d63-4c31-ba62-2b91293e20d2" Sep 5 23:54:29.648273 containerd[2163]: time="2025-09-05T23:54:29.648205521Z" level=error msg="StopPodSandbox for \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\" failed" error="failed to destroy network for sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:29.648876 kubelet[3416]: E0905 23:54:29.648794 3416 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:54:29.649258 kubelet[3416]: E0905 23:54:29.649191 3416 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00"} Sep 5 23:54:29.649441 kubelet[3416]: E0905 23:54:29.649402 3416 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5c5635c-d655-4227-b462-e9b1f8d42ffd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:54:29.649666 kubelet[3416]: E0905 23:54:29.649623 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5c5635c-d655-4227-b462-e9b1f8d42ffd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-565db755f8-lctqj" podUID="e5c5635c-d655-4227-b462-e9b1f8d42ffd" Sep 5 23:54:29.724108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074-shm.mount: Deactivated successfully. Sep 5 23:54:29.724855 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00-shm.mount: Deactivated successfully. Sep 5 23:54:29.726025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62-shm.mount: Deactivated successfully. Sep 5 23:54:35.086150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2972451109.mount: Deactivated successfully. Sep 5 23:54:35.145359 containerd[2163]: time="2025-09-05T23:54:35.144660516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:35.146544 containerd[2163]: time="2025-09-05T23:54:35.146438196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 23:54:35.149267 containerd[2163]: time="2025-09-05T23:54:35.149135688Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:35.155910 containerd[2163]: time="2025-09-05T23:54:35.155770824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:35.157697 containerd[2163]: time="2025-09-05T23:54:35.157597380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.756335925s" Sep 5 23:54:35.157697 containerd[2163]: time="2025-09-05T23:54:35.157681968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 23:54:35.203132 containerd[2163]: time="2025-09-05T23:54:35.202915825Z" level=info msg="CreateContainer within sandbox \"ed2842b0f5f3d786555c6fa3a8c725873cb7bb42329549cbc28a2fc67f79e549\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 23:54:35.246500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213166512.mount: Deactivated successfully. 
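
Every KillPodSandbox failure above reduces to the same condition: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the host file that the calico/node container writes once it is running, so every CNI delete on this node fails and the pod workers retry. A minimal Go sketch of the probe that the error message itself asks for (illustrative only, not Calico's own code):

```go
// Hedged sketch: reproduces the probe behind the "stat /var/lib/calico/nodename"
// errors above. calico/node writes the node name to this host path; until it
// does, every Calico CNI ADD/DEL on the node fails with the message below.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// The condition the kubelet keeps logging for every sandbox above.
		fmt.Fprintf(os.Stderr,
			"%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		os.Exit(1)
	}
	fmt.Printf("calico/node has registered this node as %q\n", strings.TrimSpace(string(data)))
}
```

That is also how the log resolves: ghcr.io/flatcar/calico/node:v3.30.3 finishes pulling and the calico-node container starts at 23:54:35, and from 23:54:36 onward the same sandboxes (84a6ab35…, 6a073209…, 07343d75…) tear down successfully.
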
Sep 5 23:54:35.252319 containerd[2163]: time="2025-09-05T23:54:35.252223825Z" level=info msg="CreateContainer within sandbox \"ed2842b0f5f3d786555c6fa3a8c725873cb7bb42329549cbc28a2fc67f79e549\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a3873b499b523f9de0471c3498d9dc2a1ca5304605c3f1db354bc107553b08d7\"" Sep 5 23:54:35.254560 containerd[2163]: time="2025-09-05T23:54:35.253646629Z" level=info msg="StartContainer for \"a3873b499b523f9de0471c3498d9dc2a1ca5304605c3f1db354bc107553b08d7\"" Sep 5 23:54:35.389776 containerd[2163]: time="2025-09-05T23:54:35.388548758Z" level=info msg="StartContainer for \"a3873b499b523f9de0471c3498d9dc2a1ca5304605c3f1db354bc107553b08d7\" returns successfully" Sep 5 23:54:35.781889 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 23:54:35.782650 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 5 23:54:35.995939 kubelet[3416]: I0905 23:54:35.991178 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qt7g8" podStartSLOduration=2.819169465 podStartE2EDuration="18.991143833s" podCreationTimestamp="2025-09-05 23:54:17 +0000 UTC" firstStartedPulling="2025-09-05 23:54:18.990079764 +0000 UTC m=+32.394929106" lastFinishedPulling="2025-09-05 23:54:35.162054132 +0000 UTC m=+48.566903474" observedRunningTime="2025-09-05 23:54:35.513067898 +0000 UTC m=+48.917917312" watchObservedRunningTime="2025-09-05 23:54:35.991143833 +0000 UTC m=+49.395993175" Sep 5 23:54:35.998871 containerd[2163]: time="2025-09-05T23:54:35.998788313Z" level=info msg="StopPodSandbox for \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\"" Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.281 [INFO][4934] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.281 [INFO][4934] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" iface="eth0" netns="/var/run/netns/cni-a9ec1f19-0461-2658-94b7-199e532e464a" Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.282 [INFO][4934] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" iface="eth0" netns="/var/run/netns/cni-a9ec1f19-0461-2658-94b7-199e532e464a" Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.288 [INFO][4934] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" iface="eth0" netns="/var/run/netns/cni-a9ec1f19-0461-2658-94b7-199e532e464a" Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.288 [INFO][4934] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.288 [INFO][4934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.401 [INFO][4948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" HandleID="k8s-pod-network.84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Workload="ip--172--31--22--93-k8s-whisker--7496b667cf--hkdvd-eth0" Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.401 [INFO][4948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.401 [INFO][4948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.422 [WARNING][4948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" HandleID="k8s-pod-network.84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Workload="ip--172--31--22--93-k8s-whisker--7496b667cf--hkdvd-eth0" Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.423 [INFO][4948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" HandleID="k8s-pod-network.84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Workload="ip--172--31--22--93-k8s-whisker--7496b667cf--hkdvd-eth0" Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.429 [INFO][4948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:36.441393 containerd[2163]: 2025-09-05 23:54:36.435 [INFO][4934] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:36.446929 containerd[2163]: time="2025-09-05T23:54:36.446656863Z" level=info msg="TearDown network for sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\" successfully" Sep 5 23:54:36.446929 containerd[2163]: time="2025-09-05T23:54:36.446720283Z" level=info msg="StopPodSandbox for \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\" returns successfully" Sep 5 23:54:36.449330 systemd[1]: run-netns-cni\x2da9ec1f19\x2d0461\x2d2658\x2d94b7\x2d199e532e464a.mount: Deactivated successfully. 
Sep 5 23:54:36.524683 kubelet[3416]: I0905 23:54:36.523248 3416 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33374cf0-d22a-463e-a18e-3ad951e90629-whisker-ca-bundle\") pod \"33374cf0-d22a-463e-a18e-3ad951e90629\" (UID: \"33374cf0-d22a-463e-a18e-3ad951e90629\") " Sep 5 23:54:36.524683 kubelet[3416]: I0905 23:54:36.523650 3416 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwdq2\" (UniqueName: \"kubernetes.io/projected/33374cf0-d22a-463e-a18e-3ad951e90629-kube-api-access-jwdq2\") pod \"33374cf0-d22a-463e-a18e-3ad951e90629\" (UID: \"33374cf0-d22a-463e-a18e-3ad951e90629\") " Sep 5 23:54:36.526530 kubelet[3416]: I0905 23:54:36.526242 3416 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33374cf0-d22a-463e-a18e-3ad951e90629-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "33374cf0-d22a-463e-a18e-3ad951e90629" (UID: "33374cf0-d22a-463e-a18e-3ad951e90629"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 23:54:36.538866 kubelet[3416]: I0905 23:54:36.538740 3416 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33374cf0-d22a-463e-a18e-3ad951e90629-kube-api-access-jwdq2" (OuterVolumeSpecName: "kube-api-access-jwdq2") pod "33374cf0-d22a-463e-a18e-3ad951e90629" (UID: "33374cf0-d22a-463e-a18e-3ad951e90629"). InnerVolumeSpecName "kube-api-access-jwdq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 23:54:36.546359 systemd[1]: var-lib-kubelet-pods-33374cf0\x2dd22a\x2d463e\x2da18e\x2d3ad951e90629-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djwdq2.mount: Deactivated successfully. Sep 5 23:54:36.625969 kubelet[3416]: I0905 23:54:36.625894 3416 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33374cf0-d22a-463e-a18e-3ad951e90629-whisker-backend-key-pair\") pod \"33374cf0-d22a-463e-a18e-3ad951e90629\" (UID: \"33374cf0-d22a-463e-a18e-3ad951e90629\") " Sep 5 23:54:36.626113 kubelet[3416]: I0905 23:54:36.626059 3416 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33374cf0-d22a-463e-a18e-3ad951e90629-whisker-ca-bundle\") on node \"ip-172-31-22-93\" DevicePath \"\"" Sep 5 23:54:36.626113 kubelet[3416]: I0905 23:54:36.626089 3416 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwdq2\" (UniqueName: \"kubernetes.io/projected/33374cf0-d22a-463e-a18e-3ad951e90629-kube-api-access-jwdq2\") on node \"ip-172-31-22-93\" DevicePath \"\"" Sep 5 23:54:36.644898 kubelet[3416]: I0905 23:54:36.644824 3416 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33374cf0-d22a-463e-a18e-3ad951e90629-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "33374cf0-d22a-463e-a18e-3ad951e90629" (UID: "33374cf0-d22a-463e-a18e-3ad951e90629"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 23:54:36.645592 systemd[1]: var-lib-kubelet-pods-33374cf0\x2dd22a\x2d463e\x2da18e\x2d3ad951e90629-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 5 23:54:36.726658 kubelet[3416]: I0905 23:54:36.726557 3416 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33374cf0-d22a-463e-a18e-3ad951e90629-whisker-backend-key-pair\") on node \"ip-172-31-22-93\" DevicePath \"\"" Sep 5 23:54:36.928015 kubelet[3416]: I0905 23:54:36.927950 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38a45510-8a84-4403-8d41-d4690c6d6bd1-whisker-backend-key-pair\") pod \"whisker-77f4776f4-qsqhg\" (UID: \"38a45510-8a84-4403-8d41-d4690c6d6bd1\") " pod="calico-system/whisker-77f4776f4-qsqhg" Sep 5 23:54:36.929532 kubelet[3416]: I0905 23:54:36.928192 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a45510-8a84-4403-8d41-d4690c6d6bd1-whisker-ca-bundle\") pod \"whisker-77f4776f4-qsqhg\" (UID: \"38a45510-8a84-4403-8d41-d4690c6d6bd1\") " pod="calico-system/whisker-77f4776f4-qsqhg" Sep 5 23:54:36.929532 kubelet[3416]: I0905 23:54:36.928636 3416 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jdv9\" (UniqueName: \"kubernetes.io/projected/38a45510-8a84-4403-8d41-d4690c6d6bd1-kube-api-access-2jdv9\") pod \"whisker-77f4776f4-qsqhg\" (UID: \"38a45510-8a84-4403-8d41-d4690c6d6bd1\") " pod="calico-system/whisker-77f4776f4-qsqhg" Sep 5 23:54:37.020901 kubelet[3416]: I0905 23:54:37.020718 3416 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33374cf0-d22a-463e-a18e-3ad951e90629" path="/var/lib/kubelet/pods/33374cf0-d22a-463e-a18e-3ad951e90629/volumes" Sep 5 23:54:37.203450 containerd[2163]: time="2025-09-05T23:54:37.203316555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77f4776f4-qsqhg,Uid:38a45510-8a84-4403-8d41-d4690c6d6bd1,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:37.442405 (udev-worker)[4920]: Network interface NamePolicy= disabled on kernel command line. 
Sep 5 23:54:37.444630 systemd-networkd[1690]: cali61a075d13e6: Link UP Sep 5 23:54:37.447971 systemd-networkd[1690]: cali61a075d13e6: Gained carrier Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.287 [INFO][4993] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.308 [INFO][4993] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0 whisker-77f4776f4- calico-system 38a45510-8a84-4403-8d41-d4690c6d6bd1 935 0 2025-09-05 23:54:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77f4776f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-22-93 whisker-77f4776f4-qsqhg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali61a075d13e6 [] [] }} ContainerID="9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" Namespace="calico-system" Pod="whisker-77f4776f4-qsqhg" WorkloadEndpoint="ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.308 [INFO][4993] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" Namespace="calico-system" Pod="whisker-77f4776f4-qsqhg" WorkloadEndpoint="ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.367 [INFO][5005] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" HandleID="k8s-pod-network.9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" Workload="ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.367 [INFO][5005] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" HandleID="k8s-pod-network.9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" Workload="ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a63e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-93", "pod":"whisker-77f4776f4-qsqhg", "timestamp":"2025-09-05 23:54:37.367551147 +0000 UTC"}, Hostname:"ip-172-31-22-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.367 [INFO][5005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.368 [INFO][5005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.368 [INFO][5005] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-93' Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.383 [INFO][5005] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" host="ip-172-31-22-93" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.392 [INFO][5005] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-93" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.400 [INFO][5005] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.403 [INFO][5005] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.407 [INFO][5005] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.407 [INFO][5005] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" host="ip-172-31-22-93" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.410 [INFO][5005] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.417 [INFO][5005] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" host="ip-172-31-22-93" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.425 [INFO][5005] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.1/26] block=192.168.80.0/26 handle="k8s-pod-network.9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" host="ip-172-31-22-93" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.426 [INFO][5005] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.1/26] handle="k8s-pod-network.9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" host="ip-172-31-22-93" Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.426 [INFO][5005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
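
The IPAM sequence above is the normal Calico path: the node already holds affinity to the block 192.168.80.0/26, the block is loaded, and the first free address, 192.168.80.1, is claimed for whisker-77f4776f4-qsqhg (the same block later yields 192.168.80.2 for calico-kube-controllers). A small sketch of the block arithmetic using the values from the log, assuming nothing beyond them (a /26 block is 64 addresses):

```go
// Hedged sketch of the arithmetic behind the IPAM lines above: the node holds
// the host-affine block 192.168.80.0/26 and hands out addresses from it
// (.1 for whisker-77f4776f4-qsqhg, .2 later for calico-kube-controllers).
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.80.0/26")
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))

	// First assignable addresses in the block, as claimed in the log.
	addr := block.Addr().Next() // skip the network address .0
	for i := 0; i < 2; i++ {
		fmt.Println(addr) // 192.168.80.1, then 192.168.80.2
		addr = addr.Next()
	}
}
```
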
Sep 5 23:54:37.482784 containerd[2163]: 2025-09-05 23:54:37.426 [INFO][5005] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.1/26] IPv6=[] ContainerID="9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" HandleID="k8s-pod-network.9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" Workload="ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0" Sep 5 23:54:37.485979 containerd[2163]: 2025-09-05 23:54:37.429 [INFO][4993] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" Namespace="calico-system" Pod="whisker-77f4776f4-qsqhg" WorkloadEndpoint="ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0", GenerateName:"whisker-77f4776f4-", Namespace:"calico-system", SelfLink:"", UID:"38a45510-8a84-4403-8d41-d4690c6d6bd1", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77f4776f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"", Pod:"whisker-77f4776f4-qsqhg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.80.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali61a075d13e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:37.485979 containerd[2163]: 2025-09-05 23:54:37.429 [INFO][4993] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.1/32] ContainerID="9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" Namespace="calico-system" Pod="whisker-77f4776f4-qsqhg" WorkloadEndpoint="ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0" Sep 5 23:54:37.485979 containerd[2163]: 2025-09-05 23:54:37.429 [INFO][4993] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61a075d13e6 ContainerID="9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" Namespace="calico-system" Pod="whisker-77f4776f4-qsqhg" WorkloadEndpoint="ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0" Sep 5 23:54:37.485979 containerd[2163]: 2025-09-05 23:54:37.448 [INFO][4993] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" Namespace="calico-system" Pod="whisker-77f4776f4-qsqhg" WorkloadEndpoint="ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0" Sep 5 23:54:37.485979 containerd[2163]: 2025-09-05 23:54:37.449 [INFO][4993] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" Namespace="calico-system" Pod="whisker-77f4776f4-qsqhg" 
WorkloadEndpoint="ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0", GenerateName:"whisker-77f4776f4-", Namespace:"calico-system", SelfLink:"", UID:"38a45510-8a84-4403-8d41-d4690c6d6bd1", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77f4776f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e", Pod:"whisker-77f4776f4-qsqhg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.80.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali61a075d13e6", MAC:"d2:9d:37:0c:b8:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:37.485979 containerd[2163]: 2025-09-05 23:54:37.477 [INFO][4993] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e" Namespace="calico-system" Pod="whisker-77f4776f4-qsqhg" WorkloadEndpoint="ip--172--31--22--93-k8s-whisker--77f4776f4--qsqhg-eth0" Sep 5 23:54:37.578720 containerd[2163]: time="2025-09-05T23:54:37.577432456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:37.578720 containerd[2163]: time="2025-09-05T23:54:37.577583032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:37.578720 containerd[2163]: time="2025-09-05T23:54:37.577638052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:37.578720 containerd[2163]: time="2025-09-05T23:54:37.577815904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:37.867295 containerd[2163]: time="2025-09-05T23:54:37.867219654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77f4776f4-qsqhg,Uid:38a45510-8a84-4403-8d41-d4690c6d6bd1,Namespace:calico-system,Attempt:0,} returns sandbox id \"9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e\"" Sep 5 23:54:37.896962 containerd[2163]: time="2025-09-05T23:54:37.896324514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 23:54:38.794530 kernel: bpftool[5203]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 23:54:38.952800 systemd-networkd[1690]: cali61a075d13e6: Gained IPv6LL Sep 5 23:54:39.276756 systemd-networkd[1690]: vxlan.calico: Link UP Sep 5 23:54:39.276773 systemd-networkd[1690]: vxlan.calico: Gained carrier Sep 5 23:54:39.363413 (udev-worker)[4918]: Network interface NamePolicy= disabled on kernel command line. Sep 5 23:54:39.409627 containerd[2163]: time="2025-09-05T23:54:39.408971838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:39.413695 containerd[2163]: time="2025-09-05T23:54:39.413404566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 23:54:39.421508 containerd[2163]: time="2025-09-05T23:54:39.421069194Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:39.437315 containerd[2163]: time="2025-09-05T23:54:39.436986462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:39.441775 containerd[2163]: time="2025-09-05T23:54:39.440681154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.544280392s" Sep 5 23:54:39.441775 containerd[2163]: time="2025-09-05T23:54:39.440745882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 23:54:39.448448 containerd[2163]: time="2025-09-05T23:54:39.448362342Z" level=info msg="CreateContainer within sandbox \"9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 23:54:39.478502 containerd[2163]: time="2025-09-05T23:54:39.478208958Z" level=info msg="CreateContainer within sandbox \"9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"64cc3b976236ce7a765aa1080982d772fcf468bf414dc44f2dec36ae2c69dd44\"" Sep 5 23:54:39.482670 containerd[2163]: time="2025-09-05T23:54:39.481781082Z" level=info msg="StartContainer for \"64cc3b976236ce7a765aa1080982d772fcf468bf414dc44f2dec36ae2c69dd44\"" Sep 5 23:54:39.745589 containerd[2163]: time="2025-09-05T23:54:39.744845707Z" level=info msg="StartContainer for 
\"64cc3b976236ce7a765aa1080982d772fcf468bf414dc44f2dec36ae2c69dd44\" returns successfully" Sep 5 23:54:39.752324 containerd[2163]: time="2025-09-05T23:54:39.751899355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 23:54:40.011984 systemd[1]: Started sshd@7-172.31.22.93:22-139.178.68.195:48782.service - OpenSSH per-connection server daemon (139.178.68.195:48782). Sep 5 23:54:40.210561 sshd[5312]: Accepted publickey for core from 139.178.68.195 port 48782 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:54:40.213162 sshd[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:40.222132 systemd-logind[2117]: New session 8 of user core. Sep 5 23:54:40.230125 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 23:54:40.521765 sshd[5312]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:40.531596 systemd-logind[2117]: Session 8 logged out. Waiting for processes to exit. Sep 5 23:54:40.532822 systemd[1]: sshd@7-172.31.22.93:22-139.178.68.195:48782.service: Deactivated successfully. Sep 5 23:54:40.545287 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 23:54:40.552872 systemd-logind[2117]: Removed session 8. Sep 5 23:54:41.130567 systemd-networkd[1690]: vxlan.calico: Gained IPv6LL Sep 5 23:54:41.999613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090515044.mount: Deactivated successfully. Sep 5 23:54:42.037596 containerd[2163]: time="2025-09-05T23:54:42.037520311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:42.041941 containerd[2163]: time="2025-09-05T23:54:42.040422391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 23:54:42.041941 containerd[2163]: time="2025-09-05T23:54:42.041539783Z" level=info msg="StopPodSandbox for \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\"" Sep 5 23:54:42.044897 containerd[2163]: time="2025-09-05T23:54:42.043143427Z" level=info msg="StopPodSandbox for \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\"" Sep 5 23:54:42.048733 containerd[2163]: time="2025-09-05T23:54:42.048537091Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:42.066412 containerd[2163]: time="2025-09-05T23:54:42.066328903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:42.071545 containerd[2163]: time="2025-09-05T23:54:42.070162519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.318188584s" Sep 5 23:54:42.071545 containerd[2163]: time="2025-09-05T23:54:42.070243603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 23:54:42.145447 
containerd[2163]: time="2025-09-05T23:54:42.145383415Z" level=info msg="CreateContainer within sandbox \"9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 23:54:42.234624 containerd[2163]: time="2025-09-05T23:54:42.234243956Z" level=info msg="CreateContainer within sandbox \"9accc8ebeb68db7bec3b7322d44f694bcc58262aa5096e9b2a204197ff248a1e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6e1413cc75b95faec4275e4fca91847facc3d5c6077a516f44e23d935fba1391\"" Sep 5 23:54:42.239500 containerd[2163]: time="2025-09-05T23:54:42.238617212Z" level=info msg="StartContainer for \"6e1413cc75b95faec4275e4fca91847facc3d5c6077a516f44e23d935fba1391\"" Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.281 [INFO][5363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.284 [INFO][5363] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" iface="eth0" netns="/var/run/netns/cni-5083b258-dd30-de4d-2444-530009efbb4d" Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.285 [INFO][5363] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" iface="eth0" netns="/var/run/netns/cni-5083b258-dd30-de4d-2444-530009efbb4d" Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.287 [INFO][5363] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" iface="eth0" netns="/var/run/netns/cni-5083b258-dd30-de4d-2444-530009efbb4d" Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.287 [INFO][5363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.287 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.442 [INFO][5383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" HandleID="k8s-pod-network.6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.442 [INFO][5383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.443 [INFO][5383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.463 [WARNING][5383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" HandleID="k8s-pod-network.6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.463 [INFO][5383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" HandleID="k8s-pod-network.6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.467 [INFO][5383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:42.491227 containerd[2163]: 2025-09-05 23:54:42.479 [INFO][5363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:42.495765 containerd[2163]: time="2025-09-05T23:54:42.492358425Z" level=info msg="TearDown network for sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\" successfully" Sep 5 23:54:42.495765 containerd[2163]: time="2025-09-05T23:54:42.495119049Z" level=info msg="StopPodSandbox for \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\" returns successfully" Sep 5 23:54:42.503455 systemd[1]: run-netns-cni\x2d5083b258\x2ddd30\x2dde4d\x2d2444\x2d530009efbb4d.mount: Deactivated successfully. Sep 5 23:54:42.506119 containerd[2163]: time="2025-09-05T23:54:42.505899909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565db755f8-vcd78,Uid:c117decb-6235-448f-af92-cc2c7e502ccf,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.322 [INFO][5362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.322 [INFO][5362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" iface="eth0" netns="/var/run/netns/cni-fcde32ed-af8b-0d7b-43da-92c3c4eec3b9" Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.326 [INFO][5362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" iface="eth0" netns="/var/run/netns/cni-fcde32ed-af8b-0d7b-43da-92c3c4eec3b9" Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.330 [INFO][5362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" iface="eth0" netns="/var/run/netns/cni-fcde32ed-af8b-0d7b-43da-92c3c4eec3b9" Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.331 [INFO][5362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.331 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.455 [INFO][5389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" HandleID="k8s-pod-network.07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.457 [INFO][5389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.467 [INFO][5389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.495 [WARNING][5389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" HandleID="k8s-pod-network.07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.495 [INFO][5389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" HandleID="k8s-pod-network.07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.504 [INFO][5389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:42.521533 containerd[2163]: 2025-09-05 23:54:42.516 [INFO][5362] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:42.526057 containerd[2163]: time="2025-09-05T23:54:42.525859749Z" level=info msg="TearDown network for sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\" successfully" Sep 5 23:54:42.526057 containerd[2163]: time="2025-09-05T23:54:42.525911133Z" level=info msg="StopPodSandbox for \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\" returns successfully" Sep 5 23:54:42.529108 containerd[2163]: time="2025-09-05T23:54:42.528711633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69b567d4fc-2p74x,Uid:382a846b-2e68-4366-92ad-add3d9374f37,Namespace:calico-system,Attempt:1,}" Sep 5 23:54:42.583722 containerd[2163]: time="2025-09-05T23:54:42.583421745Z" level=info msg="StartContainer for \"6e1413cc75b95faec4275e4fca91847facc3d5c6077a516f44e23d935fba1391\" returns successfully" Sep 5 23:54:43.038732 systemd-networkd[1690]: calice59c01cfd3: Link UP Sep 5 23:54:43.042447 systemd-networkd[1690]: calice59c01cfd3: Gained carrier Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.798 [INFO][5432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0 calico-kube-controllers-69b567d4fc- calico-system 382a846b-2e68-4366-92ad-add3d9374f37 1006 0 2025-09-05 23:54:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69b567d4fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-93 calico-kube-controllers-69b567d4fc-2p74x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calice59c01cfd3 [] [] }} ContainerID="b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" Namespace="calico-system" Pod="calico-kube-controllers-69b567d4fc-2p74x" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.799 [INFO][5432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" Namespace="calico-system" Pod="calico-kube-controllers-69b567d4fc-2p74x" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.935 [INFO][5447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" HandleID="k8s-pod-network.b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.935 [INFO][5447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" HandleID="k8s-pod-network.b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-93", "pod":"calico-kube-controllers-69b567d4fc-2p74x", 
"timestamp":"2025-09-05 23:54:42.935519483 +0000 UTC"}, Hostname:"ip-172-31-22-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.936 [INFO][5447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.936 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.936 [INFO][5447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-93' Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.956 [INFO][5447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" host="ip-172-31-22-93" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.966 [INFO][5447] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-93" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.979 [INFO][5447] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.982 [INFO][5447] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.987 [INFO][5447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.987 [INFO][5447] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" host="ip-172-31-22-93" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:42.990 [INFO][5447] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700 Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:43.001 [INFO][5447] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" host="ip-172-31-22-93" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:43.015 [INFO][5447] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.2/26] block=192.168.80.0/26 handle="k8s-pod-network.b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" host="ip-172-31-22-93" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:43.015 [INFO][5447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.2/26] handle="k8s-pod-network.b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" host="ip-172-31-22-93" Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:43.015 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:54:43.109047 containerd[2163]: 2025-09-05 23:54:43.016 [INFO][5447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.2/26] IPv6=[] ContainerID="b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" HandleID="k8s-pod-network.b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:43.113432 containerd[2163]: 2025-09-05 23:54:43.024 [INFO][5432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" Namespace="calico-system" Pod="calico-kube-controllers-69b567d4fc-2p74x" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0", GenerateName:"calico-kube-controllers-69b567d4fc-", Namespace:"calico-system", SelfLink:"", UID:"382a846b-2e68-4366-92ad-add3d9374f37", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69b567d4fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"", Pod:"calico-kube-controllers-69b567d4fc-2p74x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice59c01cfd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:43.113432 containerd[2163]: 2025-09-05 23:54:43.024 [INFO][5432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.2/32] ContainerID="b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" Namespace="calico-system" Pod="calico-kube-controllers-69b567d4fc-2p74x" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:43.113432 containerd[2163]: 2025-09-05 23:54:43.024 [INFO][5432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice59c01cfd3 ContainerID="b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" Namespace="calico-system" Pod="calico-kube-controllers-69b567d4fc-2p74x" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:43.113432 containerd[2163]: 2025-09-05 23:54:43.042 [INFO][5432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" Namespace="calico-system" Pod="calico-kube-controllers-69b567d4fc-2p74x" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:43.113432 containerd[2163]: 2025-09-05 
23:54:43.048 [INFO][5432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" Namespace="calico-system" Pod="calico-kube-controllers-69b567d4fc-2p74x" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0", GenerateName:"calico-kube-controllers-69b567d4fc-", Namespace:"calico-system", SelfLink:"", UID:"382a846b-2e68-4366-92ad-add3d9374f37", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69b567d4fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700", Pod:"calico-kube-controllers-69b567d4fc-2p74x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice59c01cfd3", MAC:"76:57:de:3f:38:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:43.113432 containerd[2163]: 2025-09-05 23:54:43.075 [INFO][5432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700" Namespace="calico-system" Pod="calico-kube-controllers-69b567d4fc-2p74x" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:43.201975 systemd[1]: run-netns-cni\x2dfcde32ed\x2daf8b\x2d0d7b\x2d43da\x2d92c3c4eec3b9.mount: Deactivated successfully. Sep 5 23:54:43.220641 containerd[2163]: time="2025-09-05T23:54:43.220340757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:43.220958 containerd[2163]: time="2025-09-05T23:54:43.220659849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:43.220958 containerd[2163]: time="2025-09-05T23:54:43.220723317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:43.221196 containerd[2163]: time="2025-09-05T23:54:43.220948533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:43.227252 systemd-networkd[1690]: caliad2e9cc4fc7: Link UP Sep 5 23:54:43.229311 systemd-networkd[1690]: caliad2e9cc4fc7: Gained carrier Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:42.838 [INFO][5420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0 calico-apiserver-565db755f8- calico-apiserver c117decb-6235-448f-af92-cc2c7e502ccf 1005 0 2025-09-05 23:54:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:565db755f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-93 calico-apiserver-565db755f8-vcd78 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliad2e9cc4fc7 [] [] }} ContainerID="32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-vcd78" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:42.839 [INFO][5420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-vcd78" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:42.945 [INFO][5452] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" HandleID="k8s-pod-network.32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:42.945 [INFO][5452] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" HandleID="k8s-pod-network.32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-93", "pod":"calico-apiserver-565db755f8-vcd78", "timestamp":"2025-09-05 23:54:42.945361583 +0000 UTC"}, Hostname:"ip-172-31-22-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:42.945 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.016 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.016 [INFO][5452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-93' Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.059 [INFO][5452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" host="ip-172-31-22-93" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.093 [INFO][5452] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-93" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.127 [INFO][5452] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.134 [INFO][5452] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.142 [INFO][5452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.142 [INFO][5452] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" host="ip-172-31-22-93" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.150 [INFO][5452] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.168 [INFO][5452] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" host="ip-172-31-22-93" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.191 [INFO][5452] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.3/26] block=192.168.80.0/26 handle="k8s-pod-network.32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" host="ip-172-31-22-93" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.191 [INFO][5452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.3/26] handle="k8s-pod-network.32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" host="ip-172-31-22-93" Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.191 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
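Requests [5447] and [5452] above serialize on the host-wide IPAM lock: [5452] logs "About to acquire" at 23:54:42.945 but only "Acquired" at 23:54:43.016, i.e. after [5447] releases. A short Python sketch shows one way to extract the three lock events per request from lines like these and compute wait and hold times; the regex is an assumption fitted to this exact message format.

    import re
    from datetime import datetime

    # Matches the ipam_plugin.go host-wide lock messages as printed in this journal.
    PAT = re.compile(
        r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"\[INFO\]\[(\d+)\] ipam/ipam_plugin\.go \d+: "
        r"(About to acquire|Acquired|Released) host-wide IPAM lock")

    def lock_timings(lines):
        events = {}
        for line in lines:
            m = PAT.search(line)
            if m:
                ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
                events.setdefault(m.group(2), {})[m.group(3)] = ts
        for req, ev in events.items():
            if {"About to acquire", "Acquired", "Released"} <= ev.keys():
                wait = (ev["Acquired"] - ev["About to acquire"]).total_seconds()
                hold = (ev["Released"] - ev["Acquired"]).total_seconds()
                print(f"[{req}] waited {wait:.3f}s, held the lock {hold:.3f}s")

Fed the entries above, [5447] holds the lock for roughly 80 ms while [5452] waits roughly 70 ms for that release before assigning 192.168.80.3.
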
Sep 5 23:54:43.307842 containerd[2163]: 2025-09-05 23:54:43.191 [INFO][5452] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.3/26] IPv6=[] ContainerID="32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" HandleID="k8s-pod-network.32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:43.309051 containerd[2163]: 2025-09-05 23:54:43.219 [INFO][5420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-vcd78" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0", GenerateName:"calico-apiserver-565db755f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c117decb-6235-448f-af92-cc2c7e502ccf", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565db755f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"", Pod:"calico-apiserver-565db755f8-vcd78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad2e9cc4fc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:43.309051 containerd[2163]: 2025-09-05 23:54:43.221 [INFO][5420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.3/32] ContainerID="32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-vcd78" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:43.309051 containerd[2163]: 2025-09-05 23:54:43.221 [INFO][5420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad2e9cc4fc7 ContainerID="32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-vcd78" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:43.309051 containerd[2163]: 2025-09-05 23:54:43.230 [INFO][5420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-vcd78" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:43.309051 containerd[2163]: 2025-09-05 23:54:43.232 [INFO][5420] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-vcd78" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0", GenerateName:"calico-apiserver-565db755f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c117decb-6235-448f-af92-cc2c7e502ccf", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565db755f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee", Pod:"calico-apiserver-565db755f8-vcd78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad2e9cc4fc7", MAC:"6e:2e:55:56:5a:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:43.309051 containerd[2163]: 2025-09-05 23:54:43.287 [INFO][5420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-vcd78" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:43.414514 containerd[2163]: time="2025-09-05T23:54:43.413813997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:43.414514 containerd[2163]: time="2025-09-05T23:54:43.413963745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:43.414514 containerd[2163]: time="2025-09-05T23:54:43.414004689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:43.414514 containerd[2163]: time="2025-09-05T23:54:43.414242409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:43.501606 containerd[2163]: time="2025-09-05T23:54:43.501308338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69b567d4fc-2p74x,Uid:382a846b-2e68-4366-92ad-add3d9374f37,Namespace:calico-system,Attempt:1,} returns sandbox id \"b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700\"" Sep 5 23:54:43.508089 containerd[2163]: time="2025-09-05T23:54:43.507626734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 23:54:43.600655 containerd[2163]: time="2025-09-05T23:54:43.599611606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565db755f8-vcd78,Uid:c117decb-6235-448f-af92-cc2c7e502ccf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee\"" Sep 5 23:54:44.010274 containerd[2163]: time="2025-09-05T23:54:44.010131272Z" level=info msg="StopPodSandbox for \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\"" Sep 5 23:54:44.011216 containerd[2163]: time="2025-09-05T23:54:44.010922264Z" level=info msg="StopPodSandbox for \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\"" Sep 5 23:54:44.013888 containerd[2163]: time="2025-09-05T23:54:44.013453304Z" level=info msg="StopPodSandbox for \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\"" Sep 5 23:54:44.014901 containerd[2163]: time="2025-09-05T23:54:44.014829596Z" level=info msg="StopPodSandbox for \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\"" Sep 5 23:54:44.015720 containerd[2163]: time="2025-09-05T23:54:44.015647468Z" level=info msg="StopPodSandbox for \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\"" Sep 5 23:54:44.297220 kubelet[3416]: I0905 23:54:44.295923 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-77f4776f4-qsqhg" podStartSLOduration=4.080430965 podStartE2EDuration="8.295889446s" podCreationTimestamp="2025-09-05 23:54:36 +0000 UTC" firstStartedPulling="2025-09-05 23:54:37.89563659 +0000 UTC m=+51.300485932" lastFinishedPulling="2025-09-05 23:54:42.111095071 +0000 UTC m=+55.515944413" observedRunningTime="2025-09-05 23:54:43.59304013 +0000 UTC m=+56.997889520" watchObservedRunningTime="2025-09-05 23:54:44.295889446 +0000 UTC m=+57.700738788" Sep 5 23:54:44.525909 systemd-networkd[1690]: calice59c01cfd3: Gained IPv6LL Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.302 [INFO][5617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.303 [INFO][5617] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" iface="eth0" netns="/var/run/netns/cni-4b2ce548-ac74-922a-395a-5443106e76f9" Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.303 [INFO][5617] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" iface="eth0" netns="/var/run/netns/cni-4b2ce548-ac74-922a-395a-5443106e76f9" Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.308 [INFO][5617] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" iface="eth0" netns="/var/run/netns/cni-4b2ce548-ac74-922a-395a-5443106e76f9" Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.308 [INFO][5617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.308 [INFO][5617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.589 [INFO][5648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" HandleID="k8s-pod-network.6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.591 [INFO][5648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.596 [INFO][5648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.646 [WARNING][5648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" HandleID="k8s-pod-network.6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.646 [INFO][5648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" HandleID="k8s-pod-network.6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.653 [INFO][5648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:44.701360 containerd[2163]: 2025-09-05 23:54:44.670 [INFO][5617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:54:44.714883 containerd[2163]: time="2025-09-05T23:54:44.712737276Z" level=info msg="TearDown network for sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\" successfully" Sep 5 23:54:44.714883 containerd[2163]: time="2025-09-05T23:54:44.714266940Z" level=info msg="StopPodSandbox for \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\" returns successfully" Sep 5 23:54:44.723785 systemd[1]: run-netns-cni\x2d4b2ce548\x2dac74\x2d922a\x2d395a\x2d5443106e76f9.mount: Deactivated successfully. Sep 5 23:54:44.729354 containerd[2163]: time="2025-09-05T23:54:44.725104596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xnppm,Uid:8aa21a8f-5d63-4c31-ba62-2b91293e20d2,Namespace:calico-system,Attempt:1,}" Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.289 [INFO][5612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.297 [INFO][5612] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" iface="eth0" netns="/var/run/netns/cni-4aa0c406-c691-ca86-8162-7fbd73c842a2" Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.298 [INFO][5612] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" iface="eth0" netns="/var/run/netns/cni-4aa0c406-c691-ca86-8162-7fbd73c842a2" Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.301 [INFO][5612] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" iface="eth0" netns="/var/run/netns/cni-4aa0c406-c691-ca86-8162-7fbd73c842a2" Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.301 [INFO][5612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.301 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.637 [INFO][5646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" HandleID="k8s-pod-network.4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.638 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.654 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.694 [WARNING][5646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" HandleID="k8s-pod-network.4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.694 [INFO][5646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" HandleID="k8s-pod-network.4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.709 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:44.773254 containerd[2163]: 2025-09-05 23:54:44.751 [INFO][5612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:54:44.782727 containerd[2163]: time="2025-09-05T23:54:44.781910532Z" level=info msg="TearDown network for sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\" successfully" Sep 5 23:54:44.785201 containerd[2163]: time="2025-09-05T23:54:44.782281668Z" level=info msg="StopPodSandbox for \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\" returns successfully" Sep 5 23:54:44.792897 systemd[1]: run-netns-cni\x2d4aa0c406\x2dc691\x2dca86\x2d8162\x2d7fbd73c842a2.mount: Deactivated successfully. 
Sep 5 23:54:44.801687 containerd[2163]: time="2025-09-05T23:54:44.801285612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565db755f8-lctqj,Uid:e5c5635c-d655-4227-b462-e9b1f8d42ffd,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.386 [INFO][5611] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.386 [INFO][5611] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" iface="eth0" netns="/var/run/netns/cni-07bc4d62-d6de-907b-01fb-7d76d91f34f6" Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.389 [INFO][5611] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" iface="eth0" netns="/var/run/netns/cni-07bc4d62-d6de-907b-01fb-7d76d91f34f6" Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.390 [INFO][5611] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" iface="eth0" netns="/var/run/netns/cni-07bc4d62-d6de-907b-01fb-7d76d91f34f6" Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.390 [INFO][5611] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.390 [INFO][5611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.676 [INFO][5656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" HandleID="k8s-pod-network.acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.677 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.730 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.795 [WARNING][5656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" HandleID="k8s-pod-network.acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.797 [INFO][5656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" HandleID="k8s-pod-network.acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.809 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:44.844375 containerd[2163]: 2025-09-05 23:54:44.833 [INFO][5611] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:54:44.849498 containerd[2163]: time="2025-09-05T23:54:44.845775925Z" level=info msg="TearDown network for sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\" successfully" Sep 5 23:54:44.849498 containerd[2163]: time="2025-09-05T23:54:44.845842921Z" level=info msg="StopPodSandbox for \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\" returns successfully" Sep 5 23:54:44.857062 systemd[1]: run-netns-cni\x2d07bc4d62\x2dd6de\x2d907b\x2d01fb\x2d7d76d91f34f6.mount: Deactivated successfully. Sep 5 23:54:44.889035 containerd[2163]: time="2025-09-05T23:54:44.888753085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bn4jv,Uid:b9777b13-1b79-4ea7-958f-63691e6fecb7,Namespace:kube-system,Attempt:1,}" Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.516 [INFO][5616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.518 [INFO][5616] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" iface="eth0" netns="/var/run/netns/cni-276df45d-3d07-9d00-f226-14d041484c2b" Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.518 [INFO][5616] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" iface="eth0" netns="/var/run/netns/cni-276df45d-3d07-9d00-f226-14d041484c2b" Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.523 [INFO][5616] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" iface="eth0" netns="/var/run/netns/cni-276df45d-3d07-9d00-f226-14d041484c2b" Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.523 [INFO][5616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.523 [INFO][5616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.912 [INFO][5663] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" HandleID="k8s-pod-network.707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.939 [INFO][5663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.940 [INFO][5663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.976 [WARNING][5663] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" HandleID="k8s-pod-network.707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.976 [INFO][5663] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" HandleID="k8s-pod-network.707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.983 [INFO][5663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:45.024030 containerd[2163]: 2025-09-05 23:54:44.997 [INFO][5616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:54:45.027226 containerd[2163]: time="2025-09-05T23:54:45.025488885Z" level=info msg="TearDown network for sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\" successfully" Sep 5 23:54:45.027226 containerd[2163]: time="2025-09-05T23:54:45.025548405Z" level=info msg="StopPodSandbox for \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\" returns successfully" Sep 5 23:54:45.039736 containerd[2163]: time="2025-09-05T23:54:45.039344482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfwvm,Uid:27939f7c-5277-453f-aea0-098e23380a31,Namespace:calico-system,Attempt:1,}" Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:44.445 [INFO][5618] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:44.477 [INFO][5618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" iface="eth0" netns="/var/run/netns/cni-6c928cf5-65d7-9bf0-9832-36731725e6f4" Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:44.480 [INFO][5618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" iface="eth0" netns="/var/run/netns/cni-6c928cf5-65d7-9bf0-9832-36731725e6f4" Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:44.482 [INFO][5618] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" iface="eth0" netns="/var/run/netns/cni-6c928cf5-65d7-9bf0-9832-36731725e6f4" Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:44.482 [INFO][5618] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:44.487 [INFO][5618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:44.964 [INFO][5661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" HandleID="k8s-pod-network.c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:44.964 [INFO][5661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:44.983 [INFO][5661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:45.010 [WARNING][5661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" HandleID="k8s-pod-network.c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:45.010 [INFO][5661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" HandleID="k8s-pod-network.c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:45.016 [INFO][5661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:45.075677 containerd[2163]: 2025-09-05 23:54:45.030 [INFO][5618] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:54:45.081352 containerd[2163]: time="2025-09-05T23:54:45.076227838Z" level=info msg="TearDown network for sandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\" successfully" Sep 5 23:54:45.081352 containerd[2163]: time="2025-09-05T23:54:45.076301650Z" level=info msg="StopPodSandbox for \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\" returns successfully" Sep 5 23:54:45.096209 containerd[2163]: time="2025-09-05T23:54:45.096145210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5jqb,Uid:b02e4030-fdc9-4a12-bd86-85df0b683a74,Namespace:kube-system,Attempt:1,}" Sep 5 23:54:45.228683 systemd-networkd[1690]: caliad2e9cc4fc7: Gained IPv6LL Sep 5 23:54:45.268560 systemd[1]: run-netns-cni\x2d6c928cf5\x2d65d7\x2d9bf0\x2d9832\x2d36731725e6f4.mount: Deactivated successfully. Sep 5 23:54:45.268925 systemd[1]: run-netns-cni\x2d276df45d\x2d3d07\x2d9d00\x2df226\x2d14d041484c2b.mount: Deactivated successfully. Sep 5 23:54:45.567914 systemd[1]: Started sshd@8-172.31.22.93:22-139.178.68.195:48798.service - OpenSSH per-connection server daemon (139.178.68.195:48798). 
Sep 5 23:54:45.784504 sshd[5763]: Accepted publickey for core from 139.178.68.195 port 48798 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:54:45.793928 sshd[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:45.815422 systemd-logind[2117]: New session 9 of user core. Sep 5 23:54:45.822124 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 23:54:45.899933 systemd-networkd[1690]: califd5eacce1c7: Link UP Sep 5 23:54:45.905940 systemd-networkd[1690]: califd5eacce1c7: Gained carrier Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.386 [INFO][5696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0 calico-apiserver-565db755f8- calico-apiserver e5c5635c-d655-4227-b462-e9b1f8d42ffd 1034 0 2025-09-05 23:54:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:565db755f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-93 calico-apiserver-565db755f8-lctqj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califd5eacce1c7 [] [] }} ContainerID="402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-lctqj" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.387 [INFO][5696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-lctqj" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.662 [INFO][5744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" HandleID="k8s-pod-network.402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.670 [INFO][5744] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" HandleID="k8s-pod-network.402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000382010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-93", "pod":"calico-apiserver-565db755f8-lctqj", "timestamp":"2025-09-05 23:54:45.662924017 +0000 UTC"}, Hostname:"ip-172-31-22-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.671 [INFO][5744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.671 [INFO][5744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.671 [INFO][5744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-93' Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.713 [INFO][5744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" host="ip-172-31-22-93" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.739 [INFO][5744] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-93" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.758 [INFO][5744] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.766 [INFO][5744] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.793 [INFO][5744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.793 [INFO][5744] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" host="ip-172-31-22-93" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.801 [INFO][5744] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.819 [INFO][5744] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" host="ip-172-31-22-93" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.859 [INFO][5744] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.4/26] block=192.168.80.0/26 handle="k8s-pod-network.402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" host="ip-172-31-22-93" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.864 [INFO][5744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.4/26] handle="k8s-pod-network.402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" host="ip-172-31-22-93" Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.864 [INFO][5744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
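Each "Added Mac, interface name, and active container ID to endpoint" entry (the calice59c01cfd3 and caliad2e9cc4fc7 dumps above, and the califd5eacce1c7 one that follows) serializes the entire v3.WorkloadEndpoint onto a single line, which buries the few fields usually of interest. A short Python sketch pulls Pod, InterfaceName, MAC and IPNetworks out of such a line; the regexes are assumptions fitted to the formatting shown here.

    import re

    FIELDS = {
        "pod":   re.compile(r'Pod:"([^"]+)"'),
        "iface": re.compile(r'InterfaceName:"([^"]+)"'),
        "mac":   re.compile(r'MAC:"([^"]*)"'),
        "ips":   re.compile(r'IPNetworks:\[\]string\{([^}]*)\}'),
    }

    def endpoint_summary(line):
        # Return whichever of the fields above are present in the dumped endpoint.
        out = {}
        for name, pat in FIELDS.items():
            m = pat.search(line)
            out[name] = m.group(1) if m else None
        return out

    # Fed the caliad2e9cc4fc7 dump above, this yields:
    # {'pod': 'calico-apiserver-565db755f8-vcd78', 'iface': 'caliad2e9cc4fc7',
    #  'mac': '6e:2e:55:56:5a:86', 'ips': '"192.168.80.3/32"'}
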
Sep 5 23:54:45.991809 containerd[2163]: 2025-09-05 23:54:45.864 [INFO][5744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.4/26] IPv6=[] ContainerID="402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" HandleID="k8s-pod-network.402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:54:46.005652 containerd[2163]: 2025-09-05 23:54:45.885 [INFO][5696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-lctqj" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0", GenerateName:"calico-apiserver-565db755f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5c5635c-d655-4227-b462-e9b1f8d42ffd", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565db755f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"", Pod:"calico-apiserver-565db755f8-lctqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd5eacce1c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:46.005652 containerd[2163]: 2025-09-05 23:54:45.885 [INFO][5696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.4/32] ContainerID="402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-lctqj" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:54:46.005652 containerd[2163]: 2025-09-05 23:54:45.888 [INFO][5696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd5eacce1c7 ContainerID="402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-lctqj" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:54:46.005652 containerd[2163]: 2025-09-05 23:54:45.907 [INFO][5696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-lctqj" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:54:46.005652 containerd[2163]: 2025-09-05 23:54:45.908 [INFO][5696] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-lctqj" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0", GenerateName:"calico-apiserver-565db755f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5c5635c-d655-4227-b462-e9b1f8d42ffd", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565db755f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d", Pod:"calico-apiserver-565db755f8-lctqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd5eacce1c7", MAC:"ce:32:1d:cd:ef:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:46.005652 containerd[2163]: 2025-09-05 23:54:45.956 [INFO][5696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d" Namespace="calico-apiserver" Pod="calico-apiserver-565db755f8-lctqj" WorkloadEndpoint="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:54:46.211220 containerd[2163]: time="2025-09-05T23:54:46.208605455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:46.211220 containerd[2163]: time="2025-09-05T23:54:46.208726175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:46.211220 containerd[2163]: time="2025-09-05T23:54:46.208752011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:46.211220 containerd[2163]: time="2025-09-05T23:54:46.208936475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:46.224627 systemd-networkd[1690]: cali3d7bb3a1689: Link UP Sep 5 23:54:46.234842 systemd-networkd[1690]: cali3d7bb3a1689: Gained carrier Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:45.386 [INFO][5684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0 goldmane-7988f88666- calico-system 8aa21a8f-5d63-4c31-ba62-2b91293e20d2 1035 0 2025-09-05 23:54:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-22-93 goldmane-7988f88666-xnppm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3d7bb3a1689 [] [] }} ContainerID="d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" Namespace="calico-system" Pod="goldmane-7988f88666-xnppm" WorkloadEndpoint="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:45.386 [INFO][5684] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" Namespace="calico-system" Pod="goldmane-7988f88666-xnppm" WorkloadEndpoint="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:45.720 [INFO][5746] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" HandleID="k8s-pod-network.d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:45.721 [INFO][5746] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" HandleID="k8s-pod-network.d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031d3d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-93", "pod":"goldmane-7988f88666-xnppm", "timestamp":"2025-09-05 23:54:45.720422197 +0000 UTC"}, Hostname:"ip-172-31-22-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:45.722 [INFO][5746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:45.882 [INFO][5746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:45.882 [INFO][5746] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-93' Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:45.944 [INFO][5746] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" host="ip-172-31-22-93" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:46.005 [INFO][5746] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-93" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:46.033 [INFO][5746] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:46.042 [INFO][5746] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:46.052 [INFO][5746] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:46.052 [INFO][5746] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" host="ip-172-31-22-93" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:46.057 [INFO][5746] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457 Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:46.081 [INFO][5746] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" host="ip-172-31-22-93" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:46.106 [INFO][5746] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.5/26] block=192.168.80.0/26 handle="k8s-pod-network.d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" host="ip-172-31-22-93" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:46.106 [INFO][5746] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.5/26] handle="k8s-pod-network.d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" host="ip-172-31-22-93" Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:46.106 [INFO][5746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
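
The IPAM walk above (Trying affinity → Attempting to load block → assign from block) is Calico's block-affinity model: this node owns the /26 block 192.168.80.0/26 and hands out addresses from it, so successive pods on ip-172-31-22-93 land on .4, .5, .6, .7, .8 as the rest of this section shows. A standalone check, using only the standard library, that the claimed addresses fall inside that block:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The node's affine block from the log; a /26 spans .0 through .63.
        block := netip.MustParsePrefix("192.168.80.0/26")
        for _, s := range []string{
            "192.168.80.4", // calico-apiserver-565db755f8-lctqj
            "192.168.80.5", // goldmane-7988f88666-xnppm
            "192.168.80.6", // csi-node-driver-lfwvm
            "192.168.80.7", // coredns-7c65d6cfc9-m5jqb
            "192.168.80.8", // coredns-7c65d6cfc9-bn4jv
        } {
            fmt.Println(s, "in", block, "=", block.Contains(netip.MustParseAddr(s))) // all true
        }
    }
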
Sep 5 23:54:46.353538 containerd[2163]: 2025-09-05 23:54:46.106 [INFO][5746] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.5/26] IPv6=[] ContainerID="d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" HandleID="k8s-pod-network.d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:54:46.363085 containerd[2163]: 2025-09-05 23:54:46.144 [INFO][5684] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" Namespace="calico-system" Pod="goldmane-7988f88666-xnppm" WorkloadEndpoint="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"8aa21a8f-5d63-4c31-ba62-2b91293e20d2", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"", Pod:"goldmane-7988f88666-xnppm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3d7bb3a1689", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:46.363085 containerd[2163]: 2025-09-05 23:54:46.148 [INFO][5684] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.5/32] ContainerID="d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" Namespace="calico-system" Pod="goldmane-7988f88666-xnppm" WorkloadEndpoint="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:54:46.363085 containerd[2163]: 2025-09-05 23:54:46.150 [INFO][5684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d7bb3a1689 ContainerID="d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" Namespace="calico-system" Pod="goldmane-7988f88666-xnppm" WorkloadEndpoint="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:54:46.363085 containerd[2163]: 2025-09-05 23:54:46.254 [INFO][5684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" Namespace="calico-system" Pod="goldmane-7988f88666-xnppm" WorkloadEndpoint="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:54:46.363085 containerd[2163]: 2025-09-05 23:54:46.262 [INFO][5684] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" Namespace="calico-system" Pod="goldmane-7988f88666-xnppm" 
WorkloadEndpoint="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"8aa21a8f-5d63-4c31-ba62-2b91293e20d2", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457", Pod:"goldmane-7988f88666-xnppm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3d7bb3a1689", MAC:"e2:4f:d7:3f:a5:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:46.363085 containerd[2163]: 2025-09-05 23:54:46.324 [INFO][5684] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457" Namespace="calico-system" Pod="goldmane-7988f88666-xnppm" WorkloadEndpoint="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:54:46.524531 systemd-networkd[1690]: cali724b3a9fb18: Link UP Sep 5 23:54:46.525046 sshd[5763]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:46.527755 systemd-networkd[1690]: cali724b3a9fb18: Gained carrier Sep 5 23:54:46.554456 systemd[1]: sshd@8-172.31.22.93:22-139.178.68.195:48798.service: Deactivated successfully. Sep 5 23:54:46.590134 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 23:54:46.598004 systemd-logind[2117]: Session 9 logged out. Waiting for processes to exit. Sep 5 23:54:46.602820 systemd-logind[2117]: Removed session 9. 
Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:45.465 [INFO][5717] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0 csi-node-driver- calico-system 27939f7c-5277-453f-aea0-098e23380a31 1038 0 2025-09-05 23:54:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-22-93 csi-node-driver-lfwvm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali724b3a9fb18 [] [] }} ContainerID="cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" Namespace="calico-system" Pod="csi-node-driver-lfwvm" WorkloadEndpoint="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:45.465 [INFO][5717] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" Namespace="calico-system" Pod="csi-node-driver-lfwvm" WorkloadEndpoint="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:45.790 [INFO][5759] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" HandleID="k8s-pod-network.cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:45.791 [INFO][5759] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" HandleID="k8s-pod-network.cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005c6b90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-93", "pod":"csi-node-driver-lfwvm", "timestamp":"2025-09-05 23:54:45.790712233 +0000 UTC"}, Hostname:"ip-172-31-22-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:45.791 [INFO][5759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.109 [INFO][5759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.110 [INFO][5759] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-93' Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.233 [INFO][5759] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" host="ip-172-31-22-93" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.294 [INFO][5759] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-93" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.333 [INFO][5759] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.340 [INFO][5759] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.349 [INFO][5759] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.350 [INFO][5759] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" host="ip-172-31-22-93" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.361 [INFO][5759] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097 Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.387 [INFO][5759] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" host="ip-172-31-22-93" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.420 [INFO][5759] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.6/26] block=192.168.80.0/26 handle="k8s-pod-network.cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" host="ip-172-31-22-93" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.420 [INFO][5759] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.6/26] handle="k8s-pod-network.cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" host="ip-172-31-22-93" Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.420 [INFO][5759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
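
The timestamps make the host-wide IPAM lock visible: [5759] asked for an address at 45.791 but only acquired the lock at 46.109, immediately after [5746] released it at 46.106, so the concurrent CNI ADDs on this node serialize instead of racing on the same block. The log does not show the lock's mechanism; a common way to get this kind of cross-process serialization is an advisory file lock, sketched here purely as an illustration of the pattern:

    package main

    import (
        "os"

        "golang.org/x/sys/unix"
    )

    // withHostLock runs fn while holding an exclusive advisory lock on path,
    // blocking until any other process holding the lock releases it.
    func withHostLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
            return err
        }
        defer unix.Flock(int(f.Fd()), unix.LOCK_UN)
        return fn()
    }
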
Sep 5 23:54:46.640589 containerd[2163]: 2025-09-05 23:54:46.420 [INFO][5759] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.6/26] IPv6=[] ContainerID="cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" HandleID="k8s-pod-network.cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:54:46.646900 containerd[2163]: 2025-09-05 23:54:46.449 [INFO][5717] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" Namespace="calico-system" Pod="csi-node-driver-lfwvm" WorkloadEndpoint="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27939f7c-5277-453f-aea0-098e23380a31", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"", Pod:"csi-node-driver-lfwvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724b3a9fb18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:46.646900 containerd[2163]: 2025-09-05 23:54:46.449 [INFO][5717] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.6/32] ContainerID="cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" Namespace="calico-system" Pod="csi-node-driver-lfwvm" WorkloadEndpoint="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:54:46.646900 containerd[2163]: 2025-09-05 23:54:46.449 [INFO][5717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali724b3a9fb18 ContainerID="cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" Namespace="calico-system" Pod="csi-node-driver-lfwvm" WorkloadEndpoint="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:54:46.646900 containerd[2163]: 2025-09-05 23:54:46.545 [INFO][5717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" Namespace="calico-system" Pod="csi-node-driver-lfwvm" WorkloadEndpoint="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:54:46.646900 containerd[2163]: 2025-09-05 23:54:46.551 [INFO][5717] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" 
Namespace="calico-system" Pod="csi-node-driver-lfwvm" WorkloadEndpoint="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27939f7c-5277-453f-aea0-098e23380a31", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097", Pod:"csi-node-driver-lfwvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724b3a9fb18", MAC:"7e:94:e6:0c:c5:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:46.646900 containerd[2163]: 2025-09-05 23:54:46.604 [INFO][5717] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097" Namespace="calico-system" Pod="csi-node-driver-lfwvm" WorkloadEndpoint="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:54:46.670806 containerd[2163]: time="2025-09-05T23:54:46.669907298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:46.670806 containerd[2163]: time="2025-09-05T23:54:46.670004654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:46.670806 containerd[2163]: time="2025-09-05T23:54:46.670068314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:46.670806 containerd[2163]: time="2025-09-05T23:54:46.670254758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:46.780537 systemd-networkd[1690]: cali0b3671f1893: Link UP Sep 5 23:54:46.790660 systemd-networkd[1690]: cali0b3671f1893: Gained carrier Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:45.512 [INFO][5728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0 coredns-7c65d6cfc9- kube-system b02e4030-fdc9-4a12-bd86-85df0b683a74 1037 0 2025-09-05 23:53:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-93 coredns-7c65d6cfc9-m5jqb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0b3671f1893 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5jqb" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:45.516 [INFO][5728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5jqb" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:45.809 [INFO][5767] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" HandleID="k8s-pod-network.d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:45.809 [INFO][5767] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" HandleID="k8s-pod-network.d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d180), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-93", "pod":"coredns-7c65d6cfc9-m5jqb", "timestamp":"2025-09-05 23:54:45.809402461 +0000 UTC"}, Hostname:"ip-172-31-22-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:45.809 [INFO][5767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.421 [INFO][5767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.425 [INFO][5767] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-93' Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.465 [INFO][5767] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" host="ip-172-31-22-93" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.511 [INFO][5767] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-93" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.552 [INFO][5767] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.563 [INFO][5767] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.607 [INFO][5767] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.608 [INFO][5767] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" host="ip-172-31-22-93" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.617 [INFO][5767] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993 Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.657 [INFO][5767] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" host="ip-172-31-22-93" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.699 [INFO][5767] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.7/26] block=192.168.80.0/26 handle="k8s-pod-network.d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" host="ip-172-31-22-93" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.701 [INFO][5767] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.7/26] handle="k8s-pod-network.d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" host="ip-172-31-22-93" Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.702 [INFO][5767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
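
Each pod's endpoint is written in two phases, visible as k8s.go 418 ("Populated endpoint", with ContainerID:"" and MAC:"" still empty), k8s.go 446 ("Added Mac, interface name, and active container ID"), and k8s.go 532 ("Wrote updated endpoint to datastore"). A hedged sketch of the second phase using the projectcalico v3 types that appear in the log dumps; import paths and client plumbing differ across Calico releases:

    package main

    import (
        "context"

        v3 "github.com/projectcalico/api/pkg/apis/projectcalico/v3"
        "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
        "github.com/projectcalico/calico/libcalico-go/lib/options"
    )

    // finishEndpoint fills in the fields that are empty in the "Populated
    // endpoint" log line and persists the update, as k8s.go 446/532 do.
    func finishEndpoint(ctx context.Context, c clientv3.Interface, wep *v3.WorkloadEndpoint, mac, containerID string) (*v3.WorkloadEndpoint, error) {
        wep.Spec.MAC = mac                 // e.g. "de:67:98:da:25:b0"
        wep.Spec.ContainerID = containerID // the containerd sandbox ID
        return c.WorkloadEndpoints().Update(ctx, wep, options.SetOptions{})
    }
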
Sep 5 23:54:46.912573 containerd[2163]: 2025-09-05 23:54:46.702 [INFO][5767] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.7/26] IPv6=[] ContainerID="d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" HandleID="k8s-pod-network.d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:54:46.918370 containerd[2163]: 2025-09-05 23:54:46.743 [INFO][5728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5jqb" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b02e4030-fdc9-4a12-bd86-85df0b683a74", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"", Pod:"coredns-7c65d6cfc9-m5jqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b3671f1893", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:46.918370 containerd[2163]: 2025-09-05 23:54:46.745 [INFO][5728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.7/32] ContainerID="d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5jqb" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:54:46.918370 containerd[2163]: 2025-09-05 23:54:46.747 [INFO][5728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b3671f1893 ContainerID="d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5jqb" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:54:46.918370 containerd[2163]: 2025-09-05 23:54:46.816 [INFO][5728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5jqb" 
WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:54:46.918370 containerd[2163]: 2025-09-05 23:54:46.817 [INFO][5728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5jqb" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b02e4030-fdc9-4a12-bd86-85df0b683a74", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993", Pod:"coredns-7c65d6cfc9-m5jqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b3671f1893", MAC:"de:67:98:da:25:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:46.918370 containerd[2163]: 2025-09-05 23:54:46.871 [INFO][5728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993" Namespace="kube-system" Pod="coredns-7c65d6cfc9-m5jqb" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:54:46.968903 containerd[2163]: time="2025-09-05T23:54:46.968814459Z" level=info msg="StopPodSandbox for \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\"" Sep 5 23:54:47.057601 containerd[2163]: time="2025-09-05T23:54:47.055848240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:47.063387 containerd[2163]: time="2025-09-05T23:54:47.058529160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:47.063387 containerd[2163]: time="2025-09-05T23:54:47.060365736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:47.083446 containerd[2163]: time="2025-09-05T23:54:47.081050244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:47.104500 containerd[2163]: time="2025-09-05T23:54:47.103708824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565db755f8-lctqj,Uid:e5c5635c-d655-4227-b462-e9b1f8d42ffd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d\"" Sep 5 23:54:47.209663 systemd-networkd[1690]: califd5eacce1c7: Gained IPv6LL Sep 5 23:54:47.367019 systemd-networkd[1690]: cali8795ad63e11: Link UP Sep 5 23:54:47.390860 containerd[2163]: time="2025-09-05T23:54:47.384582001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:47.396384 containerd[2163]: time="2025-09-05T23:54:47.388618345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:47.396384 containerd[2163]: time="2025-09-05T23:54:47.393932617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:47.399699 systemd-networkd[1690]: cali8795ad63e11: Gained carrier Sep 5 23:54:47.410335 containerd[2163]: time="2025-09-05T23:54:47.398890405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:47.465153 systemd-networkd[1690]: cali3d7bb3a1689: Gained IPv6LL Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:45.527 [INFO][5706] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0 coredns-7c65d6cfc9- kube-system b9777b13-1b79-4ea7-958f-63691e6fecb7 1036 0 2025-09-05 23:53:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-93 coredns-7c65d6cfc9-bn4jv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8795ad63e11 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bn4jv" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:45.529 [INFO][5706] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bn4jv" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:46.026 [INFO][5765] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" HandleID="k8s-pod-network.2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:46.027 [INFO][5765] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" HandleID="k8s-pod-network.2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000602d00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-93", "pod":"coredns-7c65d6cfc9-bn4jv", "timestamp":"2025-09-05 23:54:46.01713349 +0000 UTC"}, Hostname:"ip-172-31-22-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:46.030 [INFO][5765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:46.702 [INFO][5765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:46.702 [INFO][5765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-93' Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:46.813 [INFO][5765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" host="ip-172-31-22-93" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:46.922 [INFO][5765] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-93" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:46.980 [INFO][5765] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:47.010 [INFO][5765] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:47.072 [INFO][5765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ip-172-31-22-93" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:47.072 [INFO][5765] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" host="ip-172-31-22-93" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:47.106 [INFO][5765] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12 Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:47.159 [INFO][5765] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" host="ip-172-31-22-93" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:47.238 [INFO][5765] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.8/26] block=192.168.80.0/26 handle="k8s-pod-network.2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" host="ip-172-31-22-93" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:47.238 [INFO][5765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.8/26] handle="k8s-pod-network.2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" host="ip-172-31-22-93" Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:47.238 [INFO][5765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:54:47.506769 containerd[2163]: 2025-09-05 23:54:47.238 [INFO][5765] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.8/26] IPv6=[] ContainerID="2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" HandleID="k8s-pod-network.2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:54:47.507876 containerd[2163]: 2025-09-05 23:54:47.273 [INFO][5706] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bn4jv" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b9777b13-1b79-4ea7-958f-63691e6fecb7", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"", Pod:"coredns-7c65d6cfc9-bn4jv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8795ad63e11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:47.507876 containerd[2163]: 2025-09-05 23:54:47.277 [INFO][5706] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.8/32] ContainerID="2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bn4jv" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:54:47.507876 containerd[2163]: 2025-09-05 23:54:47.279 [INFO][5706] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8795ad63e11 ContainerID="2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bn4jv" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:54:47.507876 containerd[2163]: 2025-09-05 23:54:47.427 [INFO][5706] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bn4jv" 
WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:54:47.507876 containerd[2163]: 2025-09-05 23:54:47.434 [INFO][5706] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bn4jv" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b9777b13-1b79-4ea7-958f-63691e6fecb7", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12", Pod:"coredns-7c65d6cfc9-bn4jv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8795ad63e11", MAC:"22:e3:5f:ab:ca:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:47.507876 containerd[2163]: 2025-09-05 23:54:47.485 [INFO][5706] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bn4jv" WorkloadEndpoint="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:54:47.829985 containerd[2163]: time="2025-09-05T23:54:47.829515675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lfwvm,Uid:27939f7c-5277-453f-aea0-098e23380a31,Namespace:calico-system,Attempt:1,} returns sandbox id \"cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097\"" Sep 5 23:54:47.851212 containerd[2163]: time="2025-09-05T23:54:47.838329591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:47.851212 containerd[2163]: time="2025-09-05T23:54:47.838451175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:47.851212 containerd[2163]: time="2025-09-05T23:54:47.838540335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:47.851212 containerd[2163]: time="2025-09-05T23:54:47.838781391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:47.963690 containerd[2163]: time="2025-09-05T23:54:47.963035152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xnppm,Uid:8aa21a8f-5d63-4c31-ba62-2b91293e20d2,Namespace:calico-system,Attempt:1,} returns sandbox id \"d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457\"" Sep 5 23:54:48.004264 containerd[2163]: time="2025-09-05T23:54:48.004207788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m5jqb,Uid:b02e4030-fdc9-4a12-bd86-85df0b683a74,Namespace:kube-system,Attempt:1,} returns sandbox id \"d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993\"" Sep 5 23:54:48.021772 containerd[2163]: time="2025-09-05T23:54:48.021701988Z" level=info msg="CreateContainer within sandbox \"d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:54:48.112928 containerd[2163]: time="2025-09-05T23:54:48.112769353Z" level=info msg="CreateContainer within sandbox \"d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b8fc6105e49432bc7fc5dd5ba21f486c1e1e23570609585db3b19b00f59c9c9\"" Sep 5 23:54:48.118709 containerd[2163]: time="2025-09-05T23:54:48.118286509Z" level=info msg="StartContainer for \"4b8fc6105e49432bc7fc5dd5ba21f486c1e1e23570609585db3b19b00f59c9c9\"" Sep 5 23:54:48.170138 systemd-networkd[1690]: cali724b3a9fb18: Gained IPv6LL Sep 5 23:54:48.177531 containerd[2163]: time="2025-09-05T23:54:48.176720269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bn4jv,Uid:b9777b13-1b79-4ea7-958f-63691e6fecb7,Namespace:kube-system,Attempt:1,} returns sandbox id \"2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12\"" Sep 5 23:54:48.200328 containerd[2163]: time="2025-09-05T23:54:48.200239009Z" level=info msg="CreateContainer within sandbox \"2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:47.848 [WARNING][5936] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" WorkloadEndpoint="ip--172--31--22--93-k8s-whisker--7496b667cf--hkdvd-eth0" Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:47.848 [INFO][5936] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:47.848 [INFO][5936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" iface="eth0" netns="" Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:47.848 [INFO][5936] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:47.848 [INFO][5936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:48.225 [INFO][6042] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" HandleID="k8s-pod-network.84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Workload="ip--172--31--22--93-k8s-whisker--7496b667cf--hkdvd-eth0" Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:48.225 [INFO][6042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:48.225 [INFO][6042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:48.248 [WARNING][6042] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" HandleID="k8s-pod-network.84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Workload="ip--172--31--22--93-k8s-whisker--7496b667cf--hkdvd-eth0" Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:48.248 [INFO][6042] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" HandleID="k8s-pod-network.84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Workload="ip--172--31--22--93-k8s-whisker--7496b667cf--hkdvd-eth0" Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:48.253 [INFO][6042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:48.282625 containerd[2163]: 2025-09-05 23:54:48.272 [INFO][5936] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:48.288520 containerd[2163]: time="2025-09-05T23:54:48.285598682Z" level=info msg="TearDown network for sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\" successfully" Sep 5 23:54:48.288520 containerd[2163]: time="2025-09-05T23:54:48.285657842Z" level=info msg="StopPodSandbox for \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\" returns successfully" Sep 5 23:54:48.294289 containerd[2163]: time="2025-09-05T23:54:48.292492382Z" level=info msg="RemovePodSandbox for \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\"" Sep 5 23:54:48.294971 containerd[2163]: time="2025-09-05T23:54:48.294801770Z" level=info msg="Forcibly stopping sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\"" Sep 5 23:54:48.300491 containerd[2163]: time="2025-09-05T23:54:48.297059162Z" level=info msg="CreateContainer within sandbox \"2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a969822571159bf2f07678b752c4b63da8a95f34ac5b420f94127928b2694678\"" Sep 5 23:54:48.301251 containerd[2163]: time="2025-09-05T23:54:48.301161890Z" level=info msg="StartContainer for \"a969822571159bf2f07678b752c4b63da8a95f34ac5b420f94127928b2694678\"" Sep 5 23:54:48.397255 containerd[2163]: time="2025-09-05T23:54:48.397084190Z" level=info msg="StartContainer for \"4b8fc6105e49432bc7fc5dd5ba21f486c1e1e23570609585db3b19b00f59c9c9\" returns successfully" Sep 5 23:54:48.482653 systemd[1]: run-containerd-runc-k8s.io-a969822571159bf2f07678b752c4b63da8a95f34ac5b420f94127928b2694678-runc.fdEf0y.mount: Deactivated successfully. Sep 5 23:54:48.552983 systemd-networkd[1690]: cali0b3671f1893: Gained IPv6LL Sep 5 23:54:48.778013 kubelet[3416]: I0905 23:54:48.777885 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-m5jqb" podStartSLOduration=56.777856936 podStartE2EDuration="56.777856936s" podCreationTimestamp="2025-09-05 23:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:54:48.7770214 +0000 UTC m=+62.181870742" watchObservedRunningTime="2025-09-05 23:54:48.777856936 +0000 UTC m=+62.182706278" Sep 5 23:54:48.817159 containerd[2163]: time="2025-09-05T23:54:48.816360880Z" level=info msg="StartContainer for \"a969822571159bf2f07678b752c4b63da8a95f34ac5b420f94127928b2694678\" returns successfully" Sep 5 23:54:48.937011 systemd-networkd[1690]: cali8795ad63e11: Gained IPv6LL Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:48.638 [WARNING][6118] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" WorkloadEndpoint="ip--172--31--22--93-k8s-whisker--7496b667cf--hkdvd-eth0" Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:48.639 [INFO][6118] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:48.639 [INFO][6118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" iface="eth0" netns="" Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:48.639 [INFO][6118] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:48.639 [INFO][6118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:48.938 [INFO][6155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" HandleID="k8s-pod-network.84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Workload="ip--172--31--22--93-k8s-whisker--7496b667cf--hkdvd-eth0" Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:48.946 [INFO][6155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:48.946 [INFO][6155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:48.995 [WARNING][6155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" HandleID="k8s-pod-network.84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Workload="ip--172--31--22--93-k8s-whisker--7496b667cf--hkdvd-eth0" Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:48.999 [INFO][6155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" HandleID="k8s-pod-network.84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Workload="ip--172--31--22--93-k8s-whisker--7496b667cf--hkdvd-eth0" Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:49.009 [INFO][6155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:49.029973 containerd[2163]: 2025-09-05 23:54:49.018 [INFO][6118] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62" Sep 5 23:54:49.029973 containerd[2163]: time="2025-09-05T23:54:49.029893465Z" level=info msg="TearDown network for sandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\" successfully" Sep 5 23:54:49.069372 containerd[2163]: time="2025-09-05T23:54:49.069283694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:54:49.069536 containerd[2163]: time="2025-09-05T23:54:49.069416774Z" level=info msg="RemovePodSandbox \"84a6ab359cf8c5bd38007d2ed354f65074703fbbd20789d1e64a0009b67bae62\" returns successfully" Sep 5 23:54:49.074675 containerd[2163]: time="2025-09-05T23:54:49.074591150Z" level=info msg="StopPodSandbox for \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\"" Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.429 [WARNING][6205] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0", GenerateName:"calico-apiserver-565db755f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c117decb-6235-448f-af92-cc2c7e502ccf", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565db755f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee", Pod:"calico-apiserver-565db755f8-vcd78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad2e9cc4fc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.430 [INFO][6205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.430 [INFO][6205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" iface="eth0" netns="" Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.430 [INFO][6205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.430 [INFO][6205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.628 [INFO][6214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" HandleID="k8s-pod-network.6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.632 [INFO][6214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.633 [INFO][6214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.656 [WARNING][6214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" HandleID="k8s-pod-network.6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.657 [INFO][6214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" HandleID="k8s-pod-network.6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.662 [INFO][6214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:49.683578 containerd[2163]: 2025-09-05 23:54:49.675 [INFO][6205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:49.690211 containerd[2163]: time="2025-09-05T23:54:49.683636189Z" level=info msg="TearDown network for sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\" successfully" Sep 5 23:54:49.690211 containerd[2163]: time="2025-09-05T23:54:49.683682929Z" level=info msg="StopPodSandbox for \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\" returns successfully" Sep 5 23:54:49.690211 containerd[2163]: time="2025-09-05T23:54:49.686204933Z" level=info msg="RemovePodSandbox for \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\"" Sep 5 23:54:49.690211 containerd[2163]: time="2025-09-05T23:54:49.686265365Z" level=info msg="Forcibly stopping sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\"" Sep 5 23:54:49.906575 kubelet[3416]: I0905 23:54:49.906364 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bn4jv" podStartSLOduration=57.906334854 podStartE2EDuration="57.906334854s" podCreationTimestamp="2025-09-05 23:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:54:49.900992274 +0000 UTC m=+63.305841628" watchObservedRunningTime="2025-09-05 23:54:49.906334854 +0000 UTC m=+63.311184208" Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:49.983 [WARNING][6229] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0", GenerateName:"calico-apiserver-565db755f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c117decb-6235-448f-af92-cc2c7e502ccf", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565db755f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee", Pod:"calico-apiserver-565db755f8-vcd78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad2e9cc4fc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:49.992 [INFO][6229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:49.992 [INFO][6229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" iface="eth0" netns="" Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:49.997 [INFO][6229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:49.997 [INFO][6229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:50.285 [INFO][6238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" HandleID="k8s-pod-network.6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:50.285 [INFO][6238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:50.285 [INFO][6238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:50.318 [WARNING][6238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" HandleID="k8s-pod-network.6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:50.319 [INFO][6238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" HandleID="k8s-pod-network.6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--vcd78-eth0" Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:50.323 [INFO][6238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:50.348786 containerd[2163]: 2025-09-05 23:54:50.334 [INFO][6229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074" Sep 5 23:54:50.353167 containerd[2163]: time="2025-09-05T23:54:50.351038032Z" level=info msg="TearDown network for sandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\" successfully" Sep 5 23:54:50.394650 containerd[2163]: time="2025-09-05T23:54:50.394155784Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:54:50.394650 containerd[2163]: time="2025-09-05T23:54:50.394315432Z" level=info msg="RemovePodSandbox \"6a073209dc180ee26ee84732c46de0aaae0bc5845343f5d203fb1a2c92ba2074\" returns successfully" Sep 5 23:54:50.401045 containerd[2163]: time="2025-09-05T23:54:50.399856996Z" level=info msg="StopPodSandbox for \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\"" Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.584 [WARNING][6264] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0", GenerateName:"calico-kube-controllers-69b567d4fc-", Namespace:"calico-system", SelfLink:"", UID:"382a846b-2e68-4366-92ad-add3d9374f37", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69b567d4fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700", Pod:"calico-kube-controllers-69b567d4fc-2p74x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice59c01cfd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.584 [INFO][6264] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.584 [INFO][6264] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" iface="eth0" netns="" Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.585 [INFO][6264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.585 [INFO][6264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.711 [INFO][6271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" HandleID="k8s-pod-network.07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.712 [INFO][6271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.712 [INFO][6271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.747 [WARNING][6271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" HandleID="k8s-pod-network.07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.747 [INFO][6271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" HandleID="k8s-pod-network.07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.753 [INFO][6271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:50.765291 containerd[2163]: 2025-09-05 23:54:50.760 [INFO][6264] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:50.767787 containerd[2163]: time="2025-09-05T23:54:50.765934614Z" level=info msg="TearDown network for sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\" successfully" Sep 5 23:54:50.767787 containerd[2163]: time="2025-09-05T23:54:50.767001786Z" level=info msg="StopPodSandbox for \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\" returns successfully" Sep 5 23:54:50.769544 containerd[2163]: time="2025-09-05T23:54:50.769419774Z" level=info msg="RemovePodSandbox for \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\"" Sep 5 23:54:50.769738 containerd[2163]: time="2025-09-05T23:54:50.769547274Z" level=info msg="Forcibly stopping sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\"" Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:50.925 [WARNING][6286] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0", GenerateName:"calico-kube-controllers-69b567d4fc-", Namespace:"calico-system", SelfLink:"", UID:"382a846b-2e68-4366-92ad-add3d9374f37", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69b567d4fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700", Pod:"calico-kube-controllers-69b567d4fc-2p74x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice59c01cfd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:50.927 [INFO][6286] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:50.928 [INFO][6286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" iface="eth0" netns="" Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:50.928 [INFO][6286] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:50.928 [INFO][6286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:51.039 [INFO][6293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" HandleID="k8s-pod-network.07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:51.040 [INFO][6293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:51.040 [INFO][6293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:51.062 [WARNING][6293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" HandleID="k8s-pod-network.07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:51.062 [INFO][6293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" HandleID="k8s-pod-network.07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Workload="ip--172--31--22--93-k8s-calico--kube--controllers--69b567d4fc--2p74x-eth0" Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:51.066 [INFO][6293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:51.081441 containerd[2163]: 2025-09-05 23:54:51.072 [INFO][6286] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9" Sep 5 23:54:51.081441 containerd[2163]: time="2025-09-05T23:54:51.081353968Z" level=info msg="TearDown network for sandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\" successfully" Sep 5 23:54:51.093230 containerd[2163]: time="2025-09-05T23:54:51.091859812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:54:51.093230 containerd[2163]: time="2025-09-05T23:54:51.091971232Z" level=info msg="RemovePodSandbox \"07343d759e1dcecf8d6fcc671b8083f29d0329aa3a6845d223c6234f7a1b54c9\" returns successfully" Sep 5 23:54:51.407407 ntpd[2101]: Listen normally on 6 vxlan.calico 192.168.80.0:123 Sep 5 23:54:51.408093 ntpd[2101]: 5 Sep 23:54:51 ntpd[2101]: Listen normally on 6 vxlan.calico 192.168.80.0:123 Sep 5 23:54:51.408093 ntpd[2101]: 5 Sep 23:54:51 ntpd[2101]: Listen normally on 7 cali61a075d13e6 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 5 23:54:51.408093 ntpd[2101]: 5 Sep 23:54:51 ntpd[2101]: Listen normally on 8 vxlan.calico [fe80::64f2:32ff:fe59:8bcc%5]:123 Sep 5 23:54:51.408093 ntpd[2101]: 5 Sep 23:54:51 ntpd[2101]: Listen normally on 9 calice59c01cfd3 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 5 23:54:51.408093 ntpd[2101]: 5 Sep 23:54:51 ntpd[2101]: Listen normally on 10 caliad2e9cc4fc7 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 5 23:54:51.408093 ntpd[2101]: 5 Sep 23:54:51 ntpd[2101]: Listen normally on 11 califd5eacce1c7 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 5 23:54:51.408093 ntpd[2101]: 5 Sep 23:54:51 ntpd[2101]: Listen normally on 12 cali3d7bb3a1689 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 5 23:54:51.407654 ntpd[2101]: Listen normally on 7 cali61a075d13e6 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 5 23:54:51.410375 ntpd[2101]: 5 Sep 23:54:51 ntpd[2101]: Listen normally on 13 cali724b3a9fb18 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 5 23:54:51.410375 ntpd[2101]: 5 Sep 23:54:51 ntpd[2101]: Listen normally on 14 cali0b3671f1893 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 5 23:54:51.410375 ntpd[2101]: 5 Sep 23:54:51 ntpd[2101]: Listen normally on 15 cali8795ad63e11 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 5 23:54:51.407752 ntpd[2101]: Listen normally on 8 vxlan.calico [fe80::64f2:32ff:fe59:8bcc%5]:123 Sep 5 23:54:51.407830 ntpd[2101]: Listen normally on 9 calice59c01cfd3 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 5 23:54:51.407902 ntpd[2101]: Listen normally on 10 caliad2e9cc4fc7 [fe80::ecee:eeff:feee:eeee%9]:123 
Sep 5 23:54:51.407975 ntpd[2101]: Listen normally on 11 califd5eacce1c7 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 5 23:54:51.408046 ntpd[2101]: Listen normally on 12 cali3d7bb3a1689 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 5 23:54:51.408119 ntpd[2101]: Listen normally on 13 cali724b3a9fb18 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 5 23:54:51.408191 ntpd[2101]: Listen normally on 14 cali0b3671f1893 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 5 23:54:51.408261 ntpd[2101]: Listen normally on 15 cali8795ad63e11 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 5 23:54:51.517443 containerd[2163]: time="2025-09-05T23:54:51.517262358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:51.521111 containerd[2163]: time="2025-09-05T23:54:51.520740690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 23:54:51.523532 containerd[2163]: time="2025-09-05T23:54:51.523326954Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:51.528804 containerd[2163]: time="2025-09-05T23:54:51.528693618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:51.532016 containerd[2163]: time="2025-09-05T23:54:51.530456466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 8.022757972s" Sep 5 23:54:51.532016 containerd[2163]: time="2025-09-05T23:54:51.530566518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 23:54:51.534187 containerd[2163]: time="2025-09-05T23:54:51.534112338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:54:51.568564 systemd[1]: Started sshd@9-172.31.22.93:22-139.178.68.195:60596.service - OpenSSH per-connection server daemon (139.178.68.195:60596). 
Sep 5 23:54:51.582092 containerd[2163]: time="2025-09-05T23:54:51.581622714Z" level=info msg="CreateContainer within sandbox \"b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 23:54:51.612553 containerd[2163]: time="2025-09-05T23:54:51.611878902Z" level=info msg="CreateContainer within sandbox \"b19fd81b4edb2e29309b1e1c8b19f232914f05d34445426d066b4837dd635700\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b42124cb19e090c3ac689ad247b425c35bd6ce3a8422bc757654d43346545ff6\"" Sep 5 23:54:51.615935 containerd[2163]: time="2025-09-05T23:54:51.613947978Z" level=info msg="StartContainer for \"b42124cb19e090c3ac689ad247b425c35bd6ce3a8422bc757654d43346545ff6\"" Sep 5 23:54:51.828400 sshd[6301]: Accepted publickey for core from 139.178.68.195 port 60596 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:54:51.838371 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:51.860340 systemd-logind[2117]: New session 10 of user core. Sep 5 23:54:51.866042 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 23:54:51.916378 containerd[2163]: time="2025-09-05T23:54:51.916057256Z" level=info msg="StartContainer for \"b42124cb19e090c3ac689ad247b425c35bd6ce3a8422bc757654d43346545ff6\" returns successfully" Sep 5 23:54:52.266893 sshd[6301]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:52.278508 systemd[1]: sshd@9-172.31.22.93:22-139.178.68.195:60596.service: Deactivated successfully. Sep 5 23:54:52.285211 systemd-logind[2117]: Session 10 logged out. Waiting for processes to exit. Sep 5 23:54:52.286663 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 23:54:52.302014 systemd[1]: Started sshd@10-172.31.22.93:22-139.178.68.195:60608.service - OpenSSH per-connection server daemon (139.178.68.195:60608). Sep 5 23:54:52.305549 systemd-logind[2117]: Removed session 10. Sep 5 23:54:52.485950 sshd[6359]: Accepted publickey for core from 139.178.68.195 port 60608 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:54:52.488971 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:52.498431 systemd-logind[2117]: New session 11 of user core. Sep 5 23:54:52.506144 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 23:54:52.901355 kubelet[3416]: I0905 23:54:52.901067 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69b567d4fc-2p74x" podStartSLOduration=26.875389513000002 podStartE2EDuration="34.901008165s" podCreationTimestamp="2025-09-05 23:54:18 +0000 UTC" firstStartedPulling="2025-09-05 23:54:43.50664601 +0000 UTC m=+56.911495352" lastFinishedPulling="2025-09-05 23:54:51.532264674 +0000 UTC m=+64.937114004" observedRunningTime="2025-09-05 23:54:52.896905365 +0000 UTC m=+66.301754743" watchObservedRunningTime="2025-09-05 23:54:52.901008165 +0000 UTC m=+66.305857507" Sep 5 23:54:52.985411 sshd[6359]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:53.012031 systemd[1]: sshd@10-172.31.22.93:22-139.178.68.195:60608.service: Deactivated successfully. Sep 5 23:54:53.034891 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 23:54:53.055630 systemd-logind[2117]: Session 11 logged out. Waiting for processes to exit. 
Sep 5 23:54:53.083193 systemd[1]: Started sshd@11-172.31.22.93:22-139.178.68.195:60620.service - OpenSSH per-connection server daemon (139.178.68.195:60620). Sep 5 23:54:53.096438 systemd-logind[2117]: Removed session 11. Sep 5 23:54:53.300610 sshd[6390]: Accepted publickey for core from 139.178.68.195 port 60620 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:54:53.303101 sshd[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:53.317650 systemd-logind[2117]: New session 12 of user core. Sep 5 23:54:53.322047 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 23:54:53.715945 sshd[6390]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:53.732445 systemd-logind[2117]: Session 12 logged out. Waiting for processes to exit. Sep 5 23:54:53.734048 systemd[1]: sshd@11-172.31.22.93:22-139.178.68.195:60620.service: Deactivated successfully. Sep 5 23:54:53.744366 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 23:54:53.751361 systemd-logind[2117]: Removed session 12. Sep 5 23:54:54.710558 containerd[2163]: time="2025-09-05T23:54:54.710203630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:54.713131 containerd[2163]: time="2025-09-05T23:54:54.712686238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 23:54:54.715517 containerd[2163]: time="2025-09-05T23:54:54.715350994Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:54.722396 containerd[2163]: time="2025-09-05T23:54:54.722318518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:54.725437 containerd[2163]: time="2025-09-05T23:54:54.725359594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.191132368s" Sep 5 23:54:54.725437 containerd[2163]: time="2025-09-05T23:54:54.725436562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:54:54.730362 containerd[2163]: time="2025-09-05T23:54:54.729216982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:54:54.730362 containerd[2163]: time="2025-09-05T23:54:54.730048990Z" level=info msg="CreateContainer within sandbox \"32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:54:54.770377 containerd[2163]: time="2025-09-05T23:54:54.767067886Z" level=info msg="CreateContainer within sandbox \"32d1ca4b597d3606d205d573b21bf0906670ab04b8572fcbb27cca7036a93cee\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8945a433d80a7c0bece0887d6ea5ecb73cd55fc3158d8df8a5acfb10f7f53d73\"" Sep 5 23:54:54.774919 containerd[2163]: 
time="2025-09-05T23:54:54.771925066Z" level=info msg="StartContainer for \"8945a433d80a7c0bece0887d6ea5ecb73cd55fc3158d8df8a5acfb10f7f53d73\"" Sep 5 23:54:54.945577 containerd[2163]: time="2025-09-05T23:54:54.945452783Z" level=info msg="StartContainer for \"8945a433d80a7c0bece0887d6ea5ecb73cd55fc3158d8df8a5acfb10f7f53d73\" returns successfully" Sep 5 23:54:55.081785 containerd[2163]: time="2025-09-05T23:54:55.081596383Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:55.087502 containerd[2163]: time="2025-09-05T23:54:55.086930647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 23:54:55.093969 containerd[2163]: time="2025-09-05T23:54:55.093870871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 364.576129ms" Sep 5 23:54:55.094145 containerd[2163]: time="2025-09-05T23:54:55.093970279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:54:55.098969 containerd[2163]: time="2025-09-05T23:54:55.098281040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 23:54:55.103578 containerd[2163]: time="2025-09-05T23:54:55.102598880Z" level=info msg="CreateContainer within sandbox \"402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:54:55.140061 containerd[2163]: time="2025-09-05T23:54:55.139851380Z" level=info msg="CreateContainer within sandbox \"402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c0f66a831684966ccce063e3a67e29acf0fc247e7b5bb1ca9fe9f7297c676297\"" Sep 5 23:54:55.143686 containerd[2163]: time="2025-09-05T23:54:55.143626304Z" level=info msg="StartContainer for \"c0f66a831684966ccce063e3a67e29acf0fc247e7b5bb1ca9fe9f7297c676297\"" Sep 5 23:54:55.370345 containerd[2163]: time="2025-09-05T23:54:55.370171089Z" level=info msg="StartContainer for \"c0f66a831684966ccce063e3a67e29acf0fc247e7b5bb1ca9fe9f7297c676297\" returns successfully" Sep 5 23:54:55.975587 kubelet[3416]: I0905 23:54:55.975056 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-565db755f8-lctqj" podStartSLOduration=42.995352029 podStartE2EDuration="50.97502178s" podCreationTimestamp="2025-09-05 23:54:05 +0000 UTC" firstStartedPulling="2025-09-05 23:54:47.115830288 +0000 UTC m=+60.520679630" lastFinishedPulling="2025-09-05 23:54:55.095500027 +0000 UTC m=+68.500349381" observedRunningTime="2025-09-05 23:54:55.97283442 +0000 UTC m=+69.377683762" watchObservedRunningTime="2025-09-05 23:54:55.97502178 +0000 UTC m=+69.379871218" Sep 5 23:54:55.982755 kubelet[3416]: I0905 23:54:55.981680 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-565db755f8-vcd78" podStartSLOduration=39.86731368 podStartE2EDuration="50.980900472s" podCreationTimestamp="2025-09-05 23:54:05 +0000 UTC" 
firstStartedPulling="2025-09-05 23:54:43.613558414 +0000 UTC m=+57.018407756" lastFinishedPulling="2025-09-05 23:54:54.727145194 +0000 UTC m=+68.131994548" observedRunningTime="2025-09-05 23:54:55.939377472 +0000 UTC m=+69.344226910" watchObservedRunningTime="2025-09-05 23:54:55.980900472 +0000 UTC m=+69.385749898" Sep 5 23:54:56.933509 kubelet[3416]: I0905 23:54:56.932345 3416 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:54:57.074443 containerd[2163]: time="2025-09-05T23:54:57.074328813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:57.078285 containerd[2163]: time="2025-09-05T23:54:57.077502933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 23:54:57.081247 containerd[2163]: time="2025-09-05T23:54:57.081055737Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:57.091778 containerd[2163]: time="2025-09-05T23:54:57.091703097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:57.097772 containerd[2163]: time="2025-09-05T23:54:57.097408713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.999048221s" Sep 5 23:54:57.097772 containerd[2163]: time="2025-09-05T23:54:57.097532469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 23:54:57.103101 containerd[2163]: time="2025-09-05T23:54:57.101953101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 23:54:57.108364 containerd[2163]: time="2025-09-05T23:54:57.107393529Z" level=info msg="CreateContainer within sandbox \"cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 23:54:57.183322 containerd[2163]: time="2025-09-05T23:54:57.183234454Z" level=info msg="CreateContainer within sandbox \"cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b88e17163329429ccf87f0f56f69c970b782e2ff35eab65234a12133a352a120\"" Sep 5 23:54:57.184011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724897957.mount: Deactivated successfully. 
Sep 5 23:54:57.190929 containerd[2163]: time="2025-09-05T23:54:57.189709702Z" level=info msg="StartContainer for \"b88e17163329429ccf87f0f56f69c970b782e2ff35eab65234a12133a352a120\"" Sep 5 23:54:57.730669 containerd[2163]: time="2025-09-05T23:54:57.730005541Z" level=info msg="StartContainer for \"b88e17163329429ccf87f0f56f69c970b782e2ff35eab65234a12133a352a120\" returns successfully" Sep 5 23:54:57.944093 kubelet[3416]: I0905 23:54:57.944042 3416 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:54:58.762427 systemd[1]: Started sshd@12-172.31.22.93:22-139.178.68.195:60626.service - OpenSSH per-connection server daemon (139.178.68.195:60626). Sep 5 23:54:59.099177 sshd[6562]: Accepted publickey for core from 139.178.68.195 port 60626 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:54:59.108149 sshd[6562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:59.141214 systemd-logind[2117]: New session 13 of user core. Sep 5 23:54:59.152569 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 23:54:59.596603 sshd[6562]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:59.618673 systemd[1]: sshd@12-172.31.22.93:22-139.178.68.195:60626.service: Deactivated successfully. Sep 5 23:54:59.639125 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 23:54:59.646386 systemd-logind[2117]: Session 13 logged out. Waiting for processes to exit. Sep 5 23:54:59.654403 systemd-logind[2117]: Removed session 13. Sep 5 23:55:00.866765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1783838705.mount: Deactivated successfully. Sep 5 23:55:01.228591 systemd-resolved[2028]: Under memory pressure, flushing caches. Sep 5 23:55:01.230022 systemd-journald[1610]: Under memory pressure, flushing caches. Sep 5 23:55:01.229732 systemd-resolved[2028]: Flushed all caches. 
Sep 5 23:55:02.562726 containerd[2163]: time="2025-09-05T23:55:02.562546073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:02.567402 containerd[2163]: time="2025-09-05T23:55:02.567340193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 5 23:55:02.573501 containerd[2163]: time="2025-09-05T23:55:02.571215845Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:02.585151 containerd[2163]: time="2025-09-05T23:55:02.585069557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:02.590567 containerd[2163]: time="2025-09-05T23:55:02.590063561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 5.48804068s" Sep 5 23:55:02.590567 containerd[2163]: time="2025-09-05T23:55:02.590138921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 5 23:55:02.599188 containerd[2163]: time="2025-09-05T23:55:02.599119853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 23:55:02.600865 containerd[2163]: time="2025-09-05T23:55:02.599733317Z" level=info msg="CreateContainer within sandbox \"d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 23:55:02.635892 containerd[2163]: time="2025-09-05T23:55:02.635826029Z" level=info msg="CreateContainer within sandbox \"d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"33316124f948f540ee47977dfe2c10a2a7e5b6a914988f0d72d8da655edfad20\"" Sep 5 23:55:02.643037 containerd[2163]: time="2025-09-05T23:55:02.638011193Z" level=info msg="StartContainer for \"33316124f948f540ee47977dfe2c10a2a7e5b6a914988f0d72d8da655edfad20\"" Sep 5 23:55:02.788392 systemd[1]: run-containerd-runc-k8s.io-33316124f948f540ee47977dfe2c10a2a7e5b6a914988f0d72d8da655edfad20-runc.cTcVHe.mount: Deactivated successfully. 
Sep 5 23:55:02.980012 containerd[2163]: time="2025-09-05T23:55:02.979942927Z" level=info msg="StartContainer for \"33316124f948f540ee47977dfe2c10a2a7e5b6a914988f0d72d8da655edfad20\" returns successfully" Sep 5 23:55:04.612412 containerd[2163]: time="2025-09-05T23:55:04.612335239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:04.614168 containerd[2163]: time="2025-09-05T23:55:04.614099299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 23:55:04.615378 containerd[2163]: time="2025-09-05T23:55:04.615176227Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:04.630630 containerd[2163]: time="2025-09-05T23:55:04.630531379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:04.633790 systemd[1]: Started sshd@13-172.31.22.93:22-139.178.68.195:42100.service - OpenSSH per-connection server daemon (139.178.68.195:42100). Sep 5 23:55:04.644120 containerd[2163]: time="2025-09-05T23:55:04.644008963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.043495214s" Sep 5 23:55:04.644120 containerd[2163]: time="2025-09-05T23:55:04.644097259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 23:55:04.658225 containerd[2163]: time="2025-09-05T23:55:04.658113367Z" level=info msg="CreateContainer within sandbox \"cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 23:55:04.693530 containerd[2163]: time="2025-09-05T23:55:04.691033807Z" level=info msg="CreateContainer within sandbox \"cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"71497fcbc3e0bb3c29d76a3bc88c548ee4fc64cde46fb4eddd58951911c81811\"" Sep 5 23:55:04.700618 containerd[2163]: time="2025-09-05T23:55:04.700166959Z" level=info msg="StartContainer for \"71497fcbc3e0bb3c29d76a3bc88c548ee4fc64cde46fb4eddd58951911c81811\"" Sep 5 23:55:04.719714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300550320.mount: Deactivated successfully. 
Sep 5 23:55:04.868606 containerd[2163]: time="2025-09-05T23:55:04.866399480Z" level=info msg="StartContainer for \"71497fcbc3e0bb3c29d76a3bc88c548ee4fc64cde46fb4eddd58951911c81811\" returns successfully" Sep 5 23:55:04.874335 sshd[6661]: Accepted publickey for core from 139.178.68.195 port 42100 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:04.880195 sshd[6661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:04.891960 systemd-logind[2117]: New session 14 of user core. Sep 5 23:55:04.904951 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 23:55:05.055039 kubelet[3416]: I0905 23:55:05.054898 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-xnppm" podStartSLOduration=32.426846796 podStartE2EDuration="47.054871121s" podCreationTimestamp="2025-09-05 23:54:18 +0000 UTC" firstStartedPulling="2025-09-05 23:54:47.968773888 +0000 UTC m=+61.373623230" lastFinishedPulling="2025-09-05 23:55:02.596798141 +0000 UTC m=+76.001647555" observedRunningTime="2025-09-05 23:55:04.040114948 +0000 UTC m=+77.444964290" watchObservedRunningTime="2025-09-05 23:55:05.054871121 +0000 UTC m=+78.459720451" Sep 5 23:55:05.059561 kubelet[3416]: I0905 23:55:05.055289 3416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lfwvm" podStartSLOduration=30.26215925 podStartE2EDuration="47.055271885s" podCreationTimestamp="2025-09-05 23:54:18 +0000 UTC" firstStartedPulling="2025-09-05 23:54:47.856164064 +0000 UTC m=+61.261013406" lastFinishedPulling="2025-09-05 23:55:04.649276711 +0000 UTC m=+78.054126041" observedRunningTime="2025-09-05 23:55:05.054788609 +0000 UTC m=+78.459637987" watchObservedRunningTime="2025-09-05 23:55:05.055271885 +0000 UTC m=+78.460121227" Sep 5 23:55:05.285249 kubelet[3416]: I0905 23:55:05.282776 3416 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 23:55:05.285249 kubelet[3416]: I0905 23:55:05.282870 3416 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 23:55:05.311717 sshd[6661]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:05.340745 systemd[1]: sshd@13-172.31.22.93:22-139.178.68.195:42100.service: Deactivated successfully. Sep 5 23:55:05.355643 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 23:55:05.362054 systemd-logind[2117]: Session 14 logged out. Waiting for processes to exit. Sep 5 23:55:05.369555 systemd-logind[2117]: Removed session 14. Sep 5 23:55:10.349054 systemd[1]: Started sshd@14-172.31.22.93:22-139.178.68.195:56250.service - OpenSSH per-connection server daemon (139.178.68.195:56250). Sep 5 23:55:10.520288 sshd[6762]: Accepted publickey for core from 139.178.68.195 port 56250 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:10.523293 sshd[6762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:10.534305 systemd-logind[2117]: New session 15 of user core. Sep 5 23:55:10.538271 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 23:55:10.805197 sshd[6762]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:10.814259 systemd[1]: sshd@14-172.31.22.93:22-139.178.68.195:56250.service: Deactivated successfully. 
Sep 5 23:55:10.814882 systemd-logind[2117]: Session 15 logged out. Waiting for processes to exit. Sep 5 23:55:10.820056 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 23:55:10.824396 systemd-logind[2117]: Removed session 15. Sep 5 23:55:15.839075 systemd[1]: Started sshd@15-172.31.22.93:22-139.178.68.195:56266.service - OpenSSH per-connection server daemon (139.178.68.195:56266). Sep 5 23:55:16.030386 sshd[6776]: Accepted publickey for core from 139.178.68.195 port 56266 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:16.033424 sshd[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:16.045729 systemd-logind[2117]: New session 16 of user core. Sep 5 23:55:16.049707 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 23:55:16.329766 sshd[6776]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:16.351342 systemd[1]: sshd@15-172.31.22.93:22-139.178.68.195:56266.service: Deactivated successfully. Sep 5 23:55:16.367560 systemd-logind[2117]: Session 16 logged out. Waiting for processes to exit. Sep 5 23:55:16.391076 systemd[1]: Started sshd@16-172.31.22.93:22-139.178.68.195:56270.service - OpenSSH per-connection server daemon (139.178.68.195:56270). Sep 5 23:55:16.392907 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 23:55:16.396944 systemd-logind[2117]: Removed session 16. Sep 5 23:55:16.618598 sshd[6789]: Accepted publickey for core from 139.178.68.195 port 56270 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:16.624807 sshd[6789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:16.641884 systemd-logind[2117]: New session 17 of user core. Sep 5 23:55:16.651094 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 23:55:17.317337 sshd[6789]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:17.326115 systemd[1]: sshd@16-172.31.22.93:22-139.178.68.195:56270.service: Deactivated successfully. Sep 5 23:55:17.337583 systemd-logind[2117]: Session 17 logged out. Waiting for processes to exit. Sep 5 23:55:17.338592 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 23:55:17.352575 systemd[1]: Started sshd@17-172.31.22.93:22-139.178.68.195:56286.service - OpenSSH per-connection server daemon (139.178.68.195:56286). Sep 5 23:55:17.355182 systemd-logind[2117]: Removed session 17. Sep 5 23:55:17.546793 sshd[6801]: Accepted publickey for core from 139.178.68.195 port 56286 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:17.549627 sshd[6801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:17.558037 systemd-logind[2117]: New session 18 of user core. Sep 5 23:55:17.565131 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 23:55:22.333108 sshd[6801]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:22.349568 systemd-logind[2117]: Session 18 logged out. Waiting for processes to exit. Sep 5 23:55:22.351636 systemd[1]: sshd@17-172.31.22.93:22-139.178.68.195:56286.service: Deactivated successfully. Sep 5 23:55:22.371210 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 23:55:22.393988 systemd[1]: Started sshd@18-172.31.22.93:22-139.178.68.195:60252.service - OpenSSH per-connection server daemon (139.178.68.195:60252). Sep 5 23:55:22.398701 systemd-logind[2117]: Removed session 18. 
Sep 5 23:55:22.620563 sshd[6864]: Accepted publickey for core from 139.178.68.195 port 60252 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:22.628646 sshd[6864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:22.647887 systemd-logind[2117]: New session 19 of user core. Sep 5 23:55:22.653733 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 23:55:22.669353 kubelet[3416]: I0905 23:55:22.669285 3416 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:55:23.656845 sshd[6864]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:23.676349 systemd[1]: sshd@18-172.31.22.93:22-139.178.68.195:60252.service: Deactivated successfully. Sep 5 23:55:23.693874 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 23:55:23.699600 systemd-logind[2117]: Session 19 logged out. Waiting for processes to exit. Sep 5 23:55:23.722995 systemd[1]: Started sshd@19-172.31.22.93:22-139.178.68.195:60268.service - OpenSSH per-connection server daemon (139.178.68.195:60268). Sep 5 23:55:23.748250 systemd-logind[2117]: Removed session 19. Sep 5 23:55:23.933580 sshd[6891]: Accepted publickey for core from 139.178.68.195 port 60268 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:23.938782 sshd[6891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:23.952629 systemd-logind[2117]: New session 20 of user core. Sep 5 23:55:23.964276 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 23:55:24.395822 sshd[6891]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:24.404772 systemd-logind[2117]: Session 20 logged out. Waiting for processes to exit. Sep 5 23:55:24.406390 systemd[1]: sshd@19-172.31.22.93:22-139.178.68.195:60268.service: Deactivated successfully. Sep 5 23:55:24.422126 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 23:55:24.431284 systemd-logind[2117]: Removed session 20. Sep 5 23:55:29.434762 systemd[1]: Started sshd@20-172.31.22.93:22-139.178.68.195:60272.service - OpenSSH per-connection server daemon (139.178.68.195:60272). Sep 5 23:55:29.623372 sshd[6967]: Accepted publickey for core from 139.178.68.195 port 60272 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:29.627863 sshd[6967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:29.649266 systemd-logind[2117]: New session 21 of user core. Sep 5 23:55:29.653082 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 23:55:30.014804 sshd[6967]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:30.023999 systemd[1]: sshd@20-172.31.22.93:22-139.178.68.195:60272.service: Deactivated successfully. Sep 5 23:55:30.044368 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 23:55:30.048973 systemd-logind[2117]: Session 21 logged out. Waiting for processes to exit. Sep 5 23:55:30.053643 systemd-logind[2117]: Removed session 21. Sep 5 23:55:35.049378 systemd[1]: Started sshd@21-172.31.22.93:22-139.178.68.195:57844.service - OpenSSH per-connection server daemon (139.178.68.195:57844). Sep 5 23:55:35.238540 sshd[6981]: Accepted publickey for core from 139.178.68.195 port 57844 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:35.241930 sshd[6981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:35.256241 systemd-logind[2117]: New session 22 of user core. 
Sep 5 23:55:35.263734 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 23:55:35.605507 sshd[6981]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:35.616631 systemd-logind[2117]: Session 22 logged out. Waiting for processes to exit. Sep 5 23:55:35.624587 systemd[1]: sshd@21-172.31.22.93:22-139.178.68.195:57844.service: Deactivated successfully. Sep 5 23:55:35.634608 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 23:55:35.638683 systemd-logind[2117]: Removed session 22. Sep 5 23:55:40.638236 systemd[1]: Started sshd@22-172.31.22.93:22-139.178.68.195:46928.service - OpenSSH per-connection server daemon (139.178.68.195:46928). Sep 5 23:55:40.816060 sshd[6996]: Accepted publickey for core from 139.178.68.195 port 46928 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:40.819879 sshd[6996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:40.829581 systemd-logind[2117]: New session 23 of user core. Sep 5 23:55:40.840561 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 23:55:41.243834 sshd[6996]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:41.257849 systemd-logind[2117]: Session 23 logged out. Waiting for processes to exit. Sep 5 23:55:41.259040 systemd[1]: sshd@22-172.31.22.93:22-139.178.68.195:46928.service: Deactivated successfully. Sep 5 23:55:41.277942 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 23:55:41.283148 systemd-logind[2117]: Removed session 23. Sep 5 23:55:46.279715 systemd[1]: Started sshd@23-172.31.22.93:22-139.178.68.195:46934.service - OpenSSH per-connection server daemon (139.178.68.195:46934). Sep 5 23:55:46.478511 sshd[7010]: Accepted publickey for core from 139.178.68.195 port 46934 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:46.481255 sshd[7010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:46.501615 systemd-logind[2117]: New session 24 of user core. Sep 5 23:55:46.510396 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 5 23:55:46.901794 sshd[7010]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:46.915082 systemd[1]: sshd@23-172.31.22.93:22-139.178.68.195:46934.service: Deactivated successfully. Sep 5 23:55:46.937649 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 23:55:46.941271 systemd-logind[2117]: Session 24 logged out. Waiting for processes to exit. Sep 5 23:55:46.948299 systemd-logind[2117]: Removed session 24. Sep 5 23:55:51.102028 containerd[2163]: time="2025-09-05T23:55:51.101386994Z" level=info msg="StopPodSandbox for \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\"" Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.194 [WARNING][7056] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"8aa21a8f-5d63-4c31-ba62-2b91293e20d2", ResourceVersion:"1357", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457", Pod:"goldmane-7988f88666-xnppm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3d7bb3a1689", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.195 [INFO][7056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.195 [INFO][7056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" iface="eth0" netns="" Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.195 [INFO][7056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.195 [INFO][7056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.277 [INFO][7064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" HandleID="k8s-pod-network.6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.278 [INFO][7064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.278 [INFO][7064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.295 [WARNING][7064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" HandleID="k8s-pod-network.6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.295 [INFO][7064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" HandleID="k8s-pod-network.6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.298 [INFO][7064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:51.305948 containerd[2163]: 2025-09-05 23:55:51.301 [INFO][7056] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:55:51.305948 containerd[2163]: time="2025-09-05T23:55:51.305656419Z" level=info msg="TearDown network for sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\" successfully" Sep 5 23:55:51.305948 containerd[2163]: time="2025-09-05T23:55:51.305694435Z" level=info msg="StopPodSandbox for \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\" returns successfully" Sep 5 23:55:51.308268 containerd[2163]: time="2025-09-05T23:55:51.307734663Z" level=info msg="RemovePodSandbox for \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\"" Sep 5 23:55:51.308268 containerd[2163]: time="2025-09-05T23:55:51.307972047Z" level=info msg="Forcibly stopping sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\"" Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.396 [WARNING][7080] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"8aa21a8f-5d63-4c31-ba62-2b91293e20d2", ResourceVersion:"1357", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"d7f11bb91eec1d3977ee0059f706514ee989695d55792d42c1880883d17dd457", Pod:"goldmane-7988f88666-xnppm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3d7bb3a1689", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.397 [INFO][7080] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.397 [INFO][7080] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" iface="eth0" netns="" Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.397 [INFO][7080] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.397 [INFO][7080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.464 [INFO][7087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" HandleID="k8s-pod-network.6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.465 [INFO][7087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.465 [INFO][7087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.480 [WARNING][7087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" HandleID="k8s-pod-network.6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.480 [INFO][7087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" HandleID="k8s-pod-network.6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Workload="ip--172--31--22--93-k8s-goldmane--7988f88666--xnppm-eth0" Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.483 [INFO][7087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:51.497821 containerd[2163]: 2025-09-05 23:55:51.489 [INFO][7080] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a" Sep 5 23:55:51.500518 containerd[2163]: time="2025-09-05T23:55:51.498655192Z" level=info msg="TearDown network for sandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\" successfully" Sep 5 23:55:51.519559 containerd[2163]: time="2025-09-05T23:55:51.518239528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:55:51.520696 containerd[2163]: time="2025-09-05T23:55:51.520616908Z" level=info msg="RemovePodSandbox \"6d7f06c100bd0aaf91fe1f3040514bc1e833417d942f363bdc4d7648f1b3f69a\" returns successfully" Sep 5 23:55:51.521798 containerd[2163]: time="2025-09-05T23:55:51.521564332Z" level=info msg="StopPodSandbox for \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\"" Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.660 [WARNING][7101] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0", GenerateName:"calico-apiserver-565db755f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5c5635c-d655-4227-b462-e9b1f8d42ffd", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565db755f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d", Pod:"calico-apiserver-565db755f8-lctqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd5eacce1c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.663 [INFO][7101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.663 [INFO][7101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" iface="eth0" netns="" Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.663 [INFO][7101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.663 [INFO][7101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.722 [INFO][7109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" HandleID="k8s-pod-network.4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.726 [INFO][7109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.726 [INFO][7109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.740 [WARNING][7109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" HandleID="k8s-pod-network.4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.740 [INFO][7109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" HandleID="k8s-pod-network.4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.745 [INFO][7109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:51.754847 containerd[2163]: 2025-09-05 23:55:51.749 [INFO][7101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:55:51.754847 containerd[2163]: time="2025-09-05T23:55:51.754820693Z" level=info msg="TearDown network for sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\" successfully" Sep 5 23:55:51.755665 containerd[2163]: time="2025-09-05T23:55:51.754866077Z" level=info msg="StopPodSandbox for \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\" returns successfully" Sep 5 23:55:51.758038 containerd[2163]: time="2025-09-05T23:55:51.757977269Z" level=info msg="RemovePodSandbox for \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\"" Sep 5 23:55:51.758233 containerd[2163]: time="2025-09-05T23:55:51.758042597Z" level=info msg="Forcibly stopping sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\"" Sep 5 23:55:51.943328 systemd[1]: Started sshd@24-172.31.22.93:22-139.178.68.195:33582.service - OpenSSH per-connection server daemon (139.178.68.195:33582). Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.848 [WARNING][7124] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0", GenerateName:"calico-apiserver-565db755f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5c5635c-d655-4227-b462-e9b1f8d42ffd", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565db755f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"402c1769aef20dfb7c98faca527f1941c0d01233d304e374de5cbf18d703756d", Pod:"calico-apiserver-565db755f8-lctqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd5eacce1c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.848 [INFO][7124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.848 [INFO][7124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" iface="eth0" netns="" Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.848 [INFO][7124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.848 [INFO][7124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.905 [INFO][7131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" HandleID="k8s-pod-network.4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.906 [INFO][7131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.906 [INFO][7131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.932 [WARNING][7131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" HandleID="k8s-pod-network.4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.932 [INFO][7131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" HandleID="k8s-pod-network.4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Workload="ip--172--31--22--93-k8s-calico--apiserver--565db755f8--lctqj-eth0" Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.941 [INFO][7131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:51.956568 containerd[2163]: 2025-09-05 23:55:51.948 [INFO][7124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00" Sep 5 23:55:51.956568 containerd[2163]: time="2025-09-05T23:55:51.953764866Z" level=info msg="TearDown network for sandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\" successfully" Sep 5 23:55:51.970498 containerd[2163]: time="2025-09-05T23:55:51.970096218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:55:51.970498 containerd[2163]: time="2025-09-05T23:55:51.970225146Z" level=info msg="RemovePodSandbox \"4a3a32534f78233e0efea81a7ad4e4840ae07638cc748f36e596b41bbcfd1a00\" returns successfully" Sep 5 23:55:51.972325 containerd[2163]: time="2025-09-05T23:55:51.971972322Z" level=info msg="StopPodSandbox for \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\"" Sep 5 23:55:52.190751 sshd[7137]: Accepted publickey for core from 139.178.68.195 port 33582 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:52.195848 sshd[7137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:52.222065 systemd-logind[2117]: New session 25 of user core. Sep 5 23:55:52.226121 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.148 [WARNING][7148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27939f7c-5277-453f-aea0-098e23380a31", ResourceVersion:"1228", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097", Pod:"csi-node-driver-lfwvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724b3a9fb18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.148 [INFO][7148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.148 [INFO][7148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" iface="eth0" netns="" Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.148 [INFO][7148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.148 [INFO][7148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.316 [INFO][7156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" HandleID="k8s-pod-network.707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.317 [INFO][7156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.317 [INFO][7156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.354 [WARNING][7156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" HandleID="k8s-pod-network.707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.355 [INFO][7156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" HandleID="k8s-pod-network.707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.361 [INFO][7156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:52.398501 containerd[2163]: 2025-09-05 23:55:52.384 [INFO][7148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:55:52.398501 containerd[2163]: time="2025-09-05T23:55:52.397934140Z" level=info msg="TearDown network for sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\" successfully" Sep 5 23:55:52.398501 containerd[2163]: time="2025-09-05T23:55:52.397977076Z" level=info msg="StopPodSandbox for \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\" returns successfully" Sep 5 23:55:52.412000 containerd[2163]: time="2025-09-05T23:55:52.410023696Z" level=info msg="RemovePodSandbox for \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\"" Sep 5 23:55:52.412000 containerd[2163]: time="2025-09-05T23:55:52.410103808Z" level=info msg="Forcibly stopping sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\"" Sep 5 23:55:52.718375 sshd[7137]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.544 [WARNING][7179] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27939f7c-5277-453f-aea0-098e23380a31", ResourceVersion:"1228", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"cc822fe459b514050a9d7e804d2e564bec20289c5fb809fc4185ea087caf1097", Pod:"csi-node-driver-lfwvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724b3a9fb18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.547 [INFO][7179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.547 [INFO][7179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" iface="eth0" netns="" Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.547 [INFO][7179] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.548 [INFO][7179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.696 [INFO][7188] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" HandleID="k8s-pod-network.707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.696 [INFO][7188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.696 [INFO][7188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.710 [WARNING][7188] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" HandleID="k8s-pod-network.707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.710 [INFO][7188] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" HandleID="k8s-pod-network.707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Workload="ip--172--31--22--93-k8s-csi--node--driver--lfwvm-eth0" Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.716 [INFO][7188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:52.723491 containerd[2163]: 2025-09-05 23:55:52.720 [INFO][7179] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640" Sep 5 23:55:52.723491 containerd[2163]: time="2025-09-05T23:55:52.723432042Z" level=info msg="TearDown network for sandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\" successfully" Sep 5 23:55:52.732736 containerd[2163]: time="2025-09-05T23:55:52.732607422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:55:52.732919 containerd[2163]: time="2025-09-05T23:55:52.732780438Z" level=info msg="RemovePodSandbox \"707b3fb4f353e95034e408c7f37125cd5d9d21ae4191d92652e8f36799ffc640\" returns successfully" Sep 5 23:55:52.738495 containerd[2163]: time="2025-09-05T23:55:52.737857158Z" level=info msg="StopPodSandbox for \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\"" Sep 5 23:55:52.741003 systemd[1]: sshd@24-172.31.22.93:22-139.178.68.195:33582.service: Deactivated successfully. Sep 5 23:55:52.756085 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 23:55:52.763091 systemd-logind[2117]: Session 25 logged out. Waiting for processes to exit. Sep 5 23:55:52.770909 systemd-logind[2117]: Removed session 25. Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.840 [WARNING][7205] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b02e4030-fdc9-4a12-bd86-85df0b683a74", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993", Pod:"coredns-7c65d6cfc9-m5jqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b3671f1893", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.840 [INFO][7205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.840 [INFO][7205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" iface="eth0" netns="" Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.840 [INFO][7205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.840 [INFO][7205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.906 [INFO][7213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" HandleID="k8s-pod-network.c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.908 [INFO][7213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.908 [INFO][7213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.931 [WARNING][7213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" HandleID="k8s-pod-network.c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.931 [INFO][7213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" HandleID="k8s-pod-network.c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.934 [INFO][7213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:52.947020 containerd[2163]: 2025-09-05 23:55:52.941 [INFO][7205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:55:52.953698 containerd[2163]: time="2025-09-05T23:55:52.948843139Z" level=info msg="TearDown network for sandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\" successfully" Sep 5 23:55:52.953698 containerd[2163]: time="2025-09-05T23:55:52.949755931Z" level=info msg="StopPodSandbox for \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\" returns successfully" Sep 5 23:55:52.953698 containerd[2163]: time="2025-09-05T23:55:52.951766783Z" level=info msg="RemovePodSandbox for \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\"" Sep 5 23:55:52.953698 containerd[2163]: time="2025-09-05T23:55:52.951821107Z" level=info msg="Forcibly stopping sandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\"" Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.162 [WARNING][7228] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b02e4030-fdc9-4a12-bd86-85df0b683a74", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"d66feeb78cb06ea7a37fe4bd17996b57eb83b7c1d8bd85d1d08fcfa2dfa88993", Pod:"coredns-7c65d6cfc9-m5jqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b3671f1893", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.163 [INFO][7228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.163 [INFO][7228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" iface="eth0" netns="" Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.163 [INFO][7228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.163 [INFO][7228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.231 [INFO][7237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" HandleID="k8s-pod-network.c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.232 [INFO][7237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.232 [INFO][7237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.259 [WARNING][7237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" HandleID="k8s-pod-network.c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.260 [INFO][7237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" HandleID="k8s-pod-network.c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--m5jqb-eth0" Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.266 [INFO][7237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:53.275622 containerd[2163]: 2025-09-05 23:55:53.270 [INFO][7228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4" Sep 5 23:55:53.279585 containerd[2163]: time="2025-09-05T23:55:53.277134484Z" level=info msg="TearDown network for sandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\" successfully" Sep 5 23:55:53.290370 containerd[2163]: time="2025-09-05T23:55:53.290259401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:55:53.290680 containerd[2163]: time="2025-09-05T23:55:53.290405069Z" level=info msg="RemovePodSandbox \"c7fb50ea96c5e4778eb02409f2ff6ce10a756326dea227394adc14e31b7d90d4\" returns successfully" Sep 5 23:55:53.294234 containerd[2163]: time="2025-09-05T23:55:53.294140729Z" level=info msg="StopPodSandbox for \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\"" Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.474 [WARNING][7252] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b9777b13-1b79-4ea7-958f-63691e6fecb7", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12", Pod:"coredns-7c65d6cfc9-bn4jv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8795ad63e11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.476 [INFO][7252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.476 [INFO][7252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" iface="eth0" netns="" Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.478 [INFO][7252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.478 [INFO][7252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.542 [INFO][7260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" HandleID="k8s-pod-network.acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.543 [INFO][7260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.543 [INFO][7260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.558 [WARNING][7260] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" HandleID="k8s-pod-network.acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.558 [INFO][7260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" HandleID="k8s-pod-network.acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.561 [INFO][7260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:53.574601 containerd[2163]: 2025-09-05 23:55:53.566 [INFO][7252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:55:53.574601 containerd[2163]: time="2025-09-05T23:55:53.573712614Z" level=info msg="TearDown network for sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\" successfully" Sep 5 23:55:53.574601 containerd[2163]: time="2025-09-05T23:55:53.573751926Z" level=info msg="StopPodSandbox for \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\" returns successfully" Sep 5 23:55:53.579279 containerd[2163]: time="2025-09-05T23:55:53.576904182Z" level=info msg="RemovePodSandbox for \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\"" Sep 5 23:55:53.579279 containerd[2163]: time="2025-09-05T23:55:53.576958050Z" level=info msg="Forcibly stopping sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\"" Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.704 [WARNING][7274] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b9777b13-1b79-4ea7-958f-63691e6fecb7", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-93", ContainerID:"2ac68250f4798ac3636be4a8bf4dea9b2b8c2c7ee12886ee0c2e88b78e6eff12", Pod:"coredns-7c65d6cfc9-bn4jv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8795ad63e11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.705 [INFO][7274] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.706 [INFO][7274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" iface="eth0" netns="" Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.706 [INFO][7274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.706 [INFO][7274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.764 [INFO][7281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" HandleID="k8s-pod-network.acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.765 [INFO][7281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.765 [INFO][7281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.785 [WARNING][7281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" HandleID="k8s-pod-network.acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.786 [INFO][7281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" HandleID="k8s-pod-network.acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Workload="ip--172--31--22--93-k8s-coredns--7c65d6cfc9--bn4jv-eth0" Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.789 [INFO][7281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:53.800275 containerd[2163]: 2025-09-05 23:55:53.794 [INFO][7274] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090" Sep 5 23:55:53.805840 containerd[2163]: time="2025-09-05T23:55:53.804586435Z" level=info msg="TearDown network for sandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\" successfully" Sep 5 23:55:53.842619 containerd[2163]: time="2025-09-05T23:55:53.840945751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:55:53.842951 containerd[2163]: time="2025-09-05T23:55:53.842899951Z" level=info msg="RemovePodSandbox \"acd8632e4ea19eceb8f546e2c84325cdbeba0ee838a44a545191cf106304e090\" returns successfully" Sep 5 23:55:57.751279 systemd[1]: Started sshd@25-172.31.22.93:22-139.178.68.195:33588.service - OpenSSH per-connection server daemon (139.178.68.195:33588). Sep 5 23:55:58.006599 sshd[7287]: Accepted publickey for core from 139.178.68.195 port 33588 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:58.011274 sshd[7287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:58.032865 systemd-logind[2117]: New session 26 of user core. Sep 5 23:55:58.042627 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 5 23:55:58.370517 systemd[1]: run-containerd-runc-k8s.io-b42124cb19e090c3ac689ad247b425c35bd6ce3a8422bc757654d43346545ff6-runc.vlr7mC.mount: Deactivated successfully. Sep 5 23:55:58.513973 sshd[7287]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:58.533619 systemd[1]: sshd@25-172.31.22.93:22-139.178.68.195:33588.service: Deactivated successfully. Sep 5 23:55:58.547766 systemd-logind[2117]: Session 26 logged out. Waiting for processes to exit. Sep 5 23:55:58.549650 systemd[1]: session-26.scope: Deactivated successfully. Sep 5 23:55:58.556552 systemd-logind[2117]: Removed session 26. Sep 5 23:56:13.485771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c00c5f9057deb8bcce6ed5dea13c82a1639a45b2eecc7942b62e070da912c51-rootfs.mount: Deactivated successfully. 
Sep 5 23:56:13.495066 containerd[2163]: time="2025-09-05T23:56:13.480709717Z" level=info msg="shim disconnected" id=7c00c5f9057deb8bcce6ed5dea13c82a1639a45b2eecc7942b62e070da912c51 namespace=k8s.io
Sep 5 23:56:13.495066 containerd[2163]: time="2025-09-05T23:56:13.494725501Z" level=warning msg="cleaning up after shim disconnected" id=7c00c5f9057deb8bcce6ed5dea13c82a1639a45b2eecc7942b62e070da912c51 namespace=k8s.io
Sep 5 23:56:13.495066 containerd[2163]: time="2025-09-05T23:56:13.494759989Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 23:56:13.872747 containerd[2163]: time="2025-09-05T23:56:13.872206935Z" level=info msg="shim disconnected" id=b2d87e0cb9ecce4947152fe24eef78c72788a04507e4c78e45c7236759b04468 namespace=k8s.io
Sep 5 23:56:13.874939 containerd[2163]: time="2025-09-05T23:56:13.874490739Z" level=warning msg="cleaning up after shim disconnected" id=b2d87e0cb9ecce4947152fe24eef78c72788a04507e4c78e45c7236759b04468 namespace=k8s.io
Sep 5 23:56:13.874939 containerd[2163]: time="2025-09-05T23:56:13.874568427Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 23:56:13.879957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2d87e0cb9ecce4947152fe24eef78c72788a04507e4c78e45c7236759b04468-rootfs.mount: Deactivated successfully.
Sep 5 23:56:14.536560 kubelet[3416]: I0905 23:56:14.536504 3416 scope.go:117] "RemoveContainer" containerID="b2d87e0cb9ecce4947152fe24eef78c72788a04507e4c78e45c7236759b04468"
Sep 5 23:56:14.541913 kubelet[3416]: I0905 23:56:14.541642 3416 scope.go:117] "RemoveContainer" containerID="7c00c5f9057deb8bcce6ed5dea13c82a1639a45b2eecc7942b62e070da912c51"
Sep 5 23:56:14.543992 containerd[2163]: time="2025-09-05T23:56:14.543919790Z" level=info msg="CreateContainer within sandbox \"f9e2d25e9f10b55d00d7374727841f3aa5cc3fa3481f7b2fcc99c3883e61e8d7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 5 23:56:14.548583 containerd[2163]: time="2025-09-05T23:56:14.547266914Z" level=info msg="CreateContainer within sandbox \"5b46b5393ad6b85e8835065d3471cafff5e5aae715ebd951167348b244db3a0d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 5 23:56:14.579872 containerd[2163]: time="2025-09-05T23:56:14.579692090Z" level=info msg="CreateContainer within sandbox \"f9e2d25e9f10b55d00d7374727841f3aa5cc3fa3481f7b2fcc99c3883e61e8d7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"de74d230f501498e16749de7f531c7ed33808a217e77ac58284d30efb2b7479c\""
Sep 5 23:56:14.582580 containerd[2163]: time="2025-09-05T23:56:14.582510074Z" level=info msg="StartContainer for \"de74d230f501498e16749de7f531c7ed33808a217e77ac58284d30efb2b7479c\""
Sep 5 23:56:14.586256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293548895.mount: Deactivated successfully.
Sep 5 23:56:14.589942 containerd[2163]: time="2025-09-05T23:56:14.587565734Z" level=info msg="CreateContainer within sandbox \"5b46b5393ad6b85e8835065d3471cafff5e5aae715ebd951167348b244db3a0d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a319c20586daa37a03fc6c9007f496933eff46592dbaa450b4357096422d7add\""
Sep 5 23:56:14.589942 containerd[2163]: time="2025-09-05T23:56:14.589407494Z" level=info msg="StartContainer for \"a319c20586daa37a03fc6c9007f496933eff46592dbaa450b4357096422d7add\""
Sep 5 23:56:14.769365 containerd[2163]: time="2025-09-05T23:56:14.768571131Z" level=info msg="StartContainer for \"de74d230f501498e16749de7f531c7ed33808a217e77ac58284d30efb2b7479c\" returns successfully"
Sep 5 23:56:14.827933 containerd[2163]: time="2025-09-05T23:56:14.827250832Z" level=info msg="StartContainer for \"a319c20586daa37a03fc6c9007f496933eff46592dbaa450b4357096422d7add\" returns successfully"
Sep 5 23:56:17.113545 containerd[2163]: time="2025-09-05T23:56:17.105904407Z" level=info msg="shim disconnected" id=a6d048acfa1c9e13eca51199790c5a380fc85bfafd691bcace72db54ad9d7fed namespace=k8s.io
Sep 5 23:56:17.113545 containerd[2163]: time="2025-09-05T23:56:17.105995259Z" level=warning msg="cleaning up after shim disconnected" id=a6d048acfa1c9e13eca51199790c5a380fc85bfafd691bcace72db54ad9d7fed namespace=k8s.io
Sep 5 23:56:17.113545 containerd[2163]: time="2025-09-05T23:56:17.106018167Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 23:56:17.115238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6d048acfa1c9e13eca51199790c5a380fc85bfafd691bcace72db54ad9d7fed-rootfs.mount: Deactivated successfully.
Sep 5 23:56:17.612026 kubelet[3416]: I0905 23:56:17.611934 3416 scope.go:117] "RemoveContainer" containerID="a6d048acfa1c9e13eca51199790c5a380fc85bfafd691bcace72db54ad9d7fed"
Sep 5 23:56:17.617557 containerd[2163]: time="2025-09-05T23:56:17.617506541Z" level=info msg="CreateContainer within sandbox \"69b34af34a21aa139d31362d9abfab244bd37f5da7b6d9c942541ce027765ff7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 5 23:56:17.646820 containerd[2163]: time="2025-09-05T23:56:17.646576650Z" level=info msg="CreateContainer within sandbox \"69b34af34a21aa139d31362d9abfab244bd37f5da7b6d9c942541ce027765ff7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7ddc3e290de63b082dd723290960c05657f33c9b971062906d8f10b9fe372fdc\""
Sep 5 23:56:17.649792 containerd[2163]: time="2025-09-05T23:56:17.649450674Z" level=info msg="StartContainer for \"7ddc3e290de63b082dd723290960c05657f33c9b971062906d8f10b9fe372fdc\""
Sep 5 23:56:17.790147 containerd[2163]: time="2025-09-05T23:56:17.789714678Z" level=info msg="StartContainer for \"7ddc3e290de63b082dd723290960c05657f33c9b971062906d8f10b9fe372fdc\" returns successfully"
Sep 5 23:56:20.500199 kubelet[3416]: E0905 23:56:20.499992 3416 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-93?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Sep 5 23:56:26.288305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de74d230f501498e16749de7f531c7ed33808a217e77ac58284d30efb2b7479c-rootfs.mount: Deactivated successfully.
Sep 5 23:56:26.299028 containerd[2163]: time="2025-09-05T23:56:26.298723753Z" level=info msg="shim disconnected" id=de74d230f501498e16749de7f531c7ed33808a217e77ac58284d30efb2b7479c namespace=k8s.io
Sep 5 23:56:26.299028 containerd[2163]: time="2025-09-05T23:56:26.298806697Z" level=warning msg="cleaning up after shim disconnected" id=de74d230f501498e16749de7f531c7ed33808a217e77ac58284d30efb2b7479c namespace=k8s.io
Sep 5 23:56:26.299028 containerd[2163]: time="2025-09-05T23:56:26.298829713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 23:56:26.657211 kubelet[3416]: I0905 23:56:26.657066 3416 scope.go:117] "RemoveContainer" containerID="b2d87e0cb9ecce4947152fe24eef78c72788a04507e4c78e45c7236759b04468"
Sep 5 23:56:26.659989 kubelet[3416]: I0905 23:56:26.658107 3416 scope.go:117] "RemoveContainer" containerID="de74d230f501498e16749de7f531c7ed33808a217e77ac58284d30efb2b7479c"
Sep 5 23:56:26.661097 containerd[2163]: time="2025-09-05T23:56:26.661020038Z" level=info msg="RemoveContainer for \"b2d87e0cb9ecce4947152fe24eef78c72788a04507e4c78e45c7236759b04468\""
Sep 5 23:56:26.661564 kubelet[3416]: E0905 23:56:26.661504 3416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-58fc44c59b-n2vld_tigera-operator(416c1b11-871f-4b02-bdbb-4827f5f1e9c7)\"" pod="tigera-operator/tigera-operator-58fc44c59b-n2vld" podUID="416c1b11-871f-4b02-bdbb-4827f5f1e9c7"
Sep 5 23:56:26.668269 containerd[2163]: time="2025-09-05T23:56:26.668194538Z" level=info msg="RemoveContainer for \"b2d87e0cb9ecce4947152fe24eef78c72788a04507e4c78e45c7236759b04468\" returns successfully"
Sep 5 23:56:28.292216 systemd[1]: run-containerd-runc-k8s.io-33316124f948f540ee47977dfe2c10a2a7e5b6a914988f0d72d8da655edfad20-runc.IwqZfg.mount: Deactivated successfully.
Sep 5 23:56:30.500918 kubelet[3416]: E0905 23:56:30.500836 3416 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-93?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"