Sep 12 17:09:46.241356 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 12 17:09:46.241417 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025
Sep 12 17:09:46.241443 kernel: KASLR disabled due to lack of seed
Sep 12 17:09:46.241460 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:09:46.241476 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Sep 12 17:09:46.241492 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:09:46.241509 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 12 17:09:46.241525 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 17:09:46.242402 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 17:09:46.242427 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 12 17:09:46.242453 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 17:09:46.242469 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 12 17:09:46.242485 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 12 17:09:46.242502 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 12 17:09:46.242521 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 17:09:46.242574 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 12 17:09:46.242605 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 12 17:09:46.242651 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 12 17:09:46.242675 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 12 17:09:46.242692 kernel: printk: bootconsole [uart0] enabled
Sep 12 17:09:46.242709 kernel: NUMA: Failed to initialise from firmware
Sep 12 17:09:46.242728 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 17:09:46.242744 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 12 17:09:46.242761 kernel: Zone ranges:
Sep 12 17:09:46.242778 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 12 17:09:46.242795 kernel: DMA32 empty
Sep 12 17:09:46.242819 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 12 17:09:46.242836 kernel: Movable zone start for each node
Sep 12 17:09:46.242852 kernel: Early memory node ranges
Sep 12 17:09:46.242869 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 12 17:09:46.242886 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 12 17:09:46.242902 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 12 17:09:46.242919 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 12 17:09:46.242935 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 12 17:09:46.242951 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 12 17:09:46.242967 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 12 17:09:46.242984 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 12 17:09:46.243000 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 17:09:46.243021 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 12 17:09:46.243039 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:09:46.243064 kernel: psci: PSCIv1.0 detected in firmware.
Sep 12 17:09:46.243082 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:09:46.243099 kernel: psci: Trusted OS migration not required
Sep 12 17:09:46.243121 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:09:46.243139 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 12 17:09:46.243157 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 17:09:46.243174 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 17:09:46.243192 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 17:09:46.243210 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:09:46.243227 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:09:46.243245 kernel: CPU features: detected: Spectre-v2
Sep 12 17:09:46.243262 kernel: CPU features: detected: Spectre-v3a
Sep 12 17:09:46.243280 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:09:46.243297 kernel: CPU features: detected: ARM erratum 1742098
Sep 12 17:09:46.243319 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 12 17:09:46.243336 kernel: alternatives: applying boot alternatives
Sep 12 17:09:46.243356 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:09:46.243375 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:09:46.243393 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:09:46.243410 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:09:46.243427 kernel: Fallback order for Node 0: 0
Sep 12 17:09:46.243444 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 12 17:09:46.243462 kernel: Policy zone: Normal
Sep 12 17:09:46.243479 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:09:46.243496 kernel: software IO TLB: area num 2.
Sep 12 17:09:46.243518 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 12 17:09:46.243569 kernel: Memory: 3820024K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 210440K reserved, 0K cma-reserved)
Sep 12 17:09:46.243592 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:09:46.243610 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:09:46.243629 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:09:46.243647 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:09:46.243665 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:09:46.243683 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:09:46.243701 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:09:46.243718 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:09:46.243737 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:09:46.243761 kernel: GICv3: 96 SPIs implemented
Sep 12 17:09:46.243780 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:09:46.243797 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:09:46.243815 kernel: GICv3: GICv3 features: 16 PPIs
Sep 12 17:09:46.243833 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 12 17:09:46.243851 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 12 17:09:46.243868 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:09:46.243886 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:09:46.243904 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 12 17:09:46.243921 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 12 17:09:46.243939 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 12 17:09:46.243956 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:09:46.243978 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 12 17:09:46.243997 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 12 17:09:46.244014 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 12 17:09:46.244032 kernel: Console: colour dummy device 80x25
Sep 12 17:09:46.244050 kernel: printk: console [tty1] enabled
Sep 12 17:09:46.244069 kernel: ACPI: Core revision 20230628
Sep 12 17:09:46.244088 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 12 17:09:46.244106 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:09:46.244125 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:09:46.244148 kernel: landlock: Up and running.
Sep 12 17:09:46.244166 kernel: SELinux: Initializing.
Sep 12 17:09:46.244184 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:09:46.244203 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:09:46.244221 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:09:46.244239 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:09:46.244257 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:09:46.244275 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:09:46.244293 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 12 17:09:46.244315 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 12 17:09:46.244333 kernel: Remapping and enabling EFI services.
Sep 12 17:09:46.244351 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:09:46.244369 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:09:46.244388 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 12 17:09:46.244406 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 12 17:09:46.244424 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 12 17:09:46.244442 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:09:46.244460 kernel: SMP: Total of 2 processors activated.
Sep 12 17:09:46.244478 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:09:46.244501 kernel: CPU features: detected: 32-bit EL1 Support
Sep 12 17:09:46.244519 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:09:46.251629 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:09:46.251665 kernel: alternatives: applying system-wide alternatives
Sep 12 17:09:46.251685 kernel: devtmpfs: initialized
Sep 12 17:09:46.251705 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:09:46.251724 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:09:46.251743 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:09:46.251762 kernel: SMBIOS 3.0.0 present.
Sep 12 17:09:46.251786 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 12 17:09:46.251805 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:09:46.251824 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:09:46.251843 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:09:46.251862 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:09:46.251881 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:09:46.251900 kernel: audit: type=2000 audit(0.315:1): state=initialized audit_enabled=0 res=1
Sep 12 17:09:46.251923 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:09:46.251942 kernel: cpuidle: using governor menu
Sep 12 17:09:46.251961 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:09:46.251980 kernel: ASID allocator initialised with 65536 entries
Sep 12 17:09:46.252001 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:09:46.252020 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:09:46.252040 kernel: Modules: 17472 pages in range for non-PLT usage
Sep 12 17:09:46.252059 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 17:09:46.252079 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:09:46.252104 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:09:46.252124 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:09:46.252143 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:09:46.252162 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:09:46.252181 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:09:46.252200 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:09:46.252220 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:09:46.252240 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:09:46.252258 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:09:46.252282 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:09:46.252302 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:09:46.252322 kernel: ACPI: Interpreter enabled
Sep 12 17:09:46.252341 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:09:46.252361 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:09:46.252380 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 12 17:09:46.253837 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:09:46.254078 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:09:46.254302 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:09:46.254514 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 12 17:09:46.255842 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 12 17:09:46.255877 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 12 17:09:46.255897 kernel: acpiphp: Slot [1] registered
Sep 12 17:09:46.255917 kernel: acpiphp: Slot [2] registered
Sep 12 17:09:46.255936 kernel: acpiphp: Slot [3] registered
Sep 12 17:09:46.255955 kernel: acpiphp: Slot [4] registered
Sep 12 17:09:46.255984 kernel: acpiphp: Slot [5] registered
Sep 12 17:09:46.256004 kernel: acpiphp: Slot [6] registered
Sep 12 17:09:46.256023 kernel: acpiphp: Slot [7] registered
Sep 12 17:09:46.256042 kernel: acpiphp: Slot [8] registered
Sep 12 17:09:46.256061 kernel: acpiphp: Slot [9] registered
Sep 12 17:09:46.256080 kernel: acpiphp: Slot [10] registered
Sep 12 17:09:46.256099 kernel: acpiphp: Slot [11] registered
Sep 12 17:09:46.256118 kernel: acpiphp: Slot [12] registered
Sep 12 17:09:46.256137 kernel: acpiphp: Slot [13] registered
Sep 12 17:09:46.256155 kernel: acpiphp: Slot [14] registered
Sep 12 17:09:46.256179 kernel: acpiphp: Slot [15] registered
Sep 12 17:09:46.256197 kernel: acpiphp: Slot [16] registered
Sep 12 17:09:46.256216 kernel: acpiphp: Slot [17] registered
Sep 12 17:09:46.256234 kernel: acpiphp: Slot [18] registered
Sep 12 17:09:46.256253 kernel: acpiphp: Slot [19] registered
Sep 12 17:09:46.256272 kernel: acpiphp: Slot [20] registered
Sep 12 17:09:46.256290 kernel: acpiphp: Slot [21] registered
Sep 12 17:09:46.256309 kernel: acpiphp: Slot [22] registered
Sep 12 17:09:46.256328 kernel: acpiphp: Slot [23] registered
Sep 12 17:09:46.256352 kernel: acpiphp: Slot [24] registered
Sep 12 17:09:46.256370 kernel: acpiphp: Slot [25] registered
Sep 12 17:09:46.256389 kernel: acpiphp: Slot [26] registered
Sep 12 17:09:46.256407 kernel: acpiphp: Slot [27] registered
Sep 12 17:09:46.256425 kernel: acpiphp: Slot [28] registered
Sep 12 17:09:46.256444 kernel: acpiphp: Slot [29] registered
Sep 12 17:09:46.256462 kernel: acpiphp: Slot [30] registered
Sep 12 17:09:46.256480 kernel: acpiphp: Slot [31] registered
Sep 12 17:09:46.256499 kernel: PCI host bridge to bus 0000:00
Sep 12 17:09:46.258878 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 12 17:09:46.259104 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:09:46.259294 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 12 17:09:46.259503 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 12 17:09:46.259799 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 12 17:09:46.260038 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 12 17:09:46.260248 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 12 17:09:46.260480 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 12 17:09:46.260723 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 12 17:09:46.260961 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:09:46.261188 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 12 17:09:46.261401 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 12 17:09:46.264802 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 12 17:09:46.265060 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 12 17:09:46.265271 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:09:46.265481 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 12 17:09:46.265731 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 12 17:09:46.265947 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 12 17:09:46.266158 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 12 17:09:46.266378 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 12 17:09:46.267710 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 12 17:09:46.267944 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:09:46.268137 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 12 17:09:46.268164 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:09:46.268184 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:09:46.268204 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:09:46.268222 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:09:46.268241 kernel: iommu: Default domain type: Translated
Sep 12 17:09:46.268260 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:09:46.268285 kernel: efivars: Registered efivars operations
Sep 12 17:09:46.268304 kernel: vgaarb: loaded
Sep 12 17:09:46.268323 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:09:46.268342 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:09:46.268360 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:09:46.268379 kernel: pnp: PnP ACPI init
Sep 12 17:09:46.268658 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 12 17:09:46.268692 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:09:46.268720 kernel: NET: Registered PF_INET protocol family
Sep 12 17:09:46.268740 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:09:46.268760 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:09:46.268806 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:09:46.268827 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:09:46.268847 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:09:46.268868 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:09:46.268888 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:09:46.268907 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:09:46.268935 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:09:46.268955 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:09:46.268975 kernel: kvm [1]: HYP mode not available
Sep 12 17:09:46.268994 kernel: Initialise system trusted keyrings
Sep 12 17:09:46.269013 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:09:46.269032 kernel: Key type asymmetric registered
Sep 12 17:09:46.269053 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:09:46.269074 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:09:46.269093 kernel: io scheduler mq-deadline registered
Sep 12 17:09:46.269121 kernel: io scheduler kyber registered
Sep 12 17:09:46.269140 kernel: io scheduler bfq registered
Sep 12 17:09:46.269430 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 12 17:09:46.269465 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:09:46.269485 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:09:46.269507 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 12 17:09:46.269526 kernel: ACPI: button: Sleep Button [SLPB]
Sep 12 17:09:46.271664 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:09:46.271702 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 12 17:09:46.271983 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 12 17:09:46.272012 kernel: printk: console [ttyS0] disabled
Sep 12 17:09:46.272031 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 12 17:09:46.272050 kernel: printk: console [ttyS0] enabled
Sep 12 17:09:46.272069 kernel: printk: bootconsole [uart0] disabled
Sep 12 17:09:46.272087 kernel: thunder_xcv, ver 1.0
Sep 12 17:09:46.272105 kernel: thunder_bgx, ver 1.0
Sep 12 17:09:46.272124 kernel: nicpf, ver 1.0
Sep 12 17:09:46.272149 kernel: nicvf, ver 1.0
Sep 12 17:09:46.272380 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:09:46.272601 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:09:45 UTC (1757696985)
Sep 12 17:09:46.272628 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:09:46.272648 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 12 17:09:46.272667 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 17:09:46.272686 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:09:46.272704 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:09:46.272729 kernel: Segment Routing with IPv6
Sep 12 17:09:46.272748 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:09:46.272782 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:09:46.272806 kernel: Key type dns_resolver registered
Sep 12 17:09:46.272825 kernel: registered taskstats version 1
Sep 12 17:09:46.272843 kernel: Loading compiled-in X.509 certificates
Sep 12 17:09:46.272862 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02'
Sep 12 17:09:46.272880 kernel: Key type .fscrypt registered
Sep 12 17:09:46.272898 kernel: Key type fscrypt-provisioning registered
Sep 12 17:09:46.272922 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:09:46.272941 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:09:46.272960 kernel: ima: No architecture policies found
Sep 12 17:09:46.272978 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:09:46.272997 kernel: clk: Disabling unused clocks
Sep 12 17:09:46.273015 kernel: Freeing unused kernel memory: 39488K
Sep 12 17:09:46.273034 kernel: Run /init as init process
Sep 12 17:09:46.273052 kernel: with arguments:
Sep 12 17:09:46.273071 kernel: /init
Sep 12 17:09:46.273089 kernel: with environment:
Sep 12 17:09:46.273112 kernel: HOME=/
Sep 12 17:09:46.273130 kernel: TERM=linux
Sep 12 17:09:46.273148 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:09:46.273171 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:09:46.273195 systemd[1]: Detected virtualization amazon.
Sep 12 17:09:46.273215 systemd[1]: Detected architecture arm64.
Sep 12 17:09:46.273235 systemd[1]: Running in initrd.
Sep 12 17:09:46.273259 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:09:46.273279 systemd[1]: Hostname set to .
Sep 12 17:09:46.273300 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:09:46.273320 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:09:46.273340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:09:46.273361 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:09:46.273382 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:09:46.273403 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:09:46.273428 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:09:46.273449 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:09:46.273473 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:09:46.273493 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:09:46.273514 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:09:46.275568 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:09:46.275607 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:09:46.275635 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:09:46.275657 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:09:46.275678 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:09:46.275698 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:09:46.275718 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:09:46.275739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:09:46.275760 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:09:46.275780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:09:46.275800 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:09:46.275826 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:09:46.275846 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:09:46.275867 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:09:46.275887 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:09:46.275908 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:09:46.275928 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:09:46.275948 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:09:46.275969 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:09:46.275994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:09:46.276015 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:09:46.276035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:09:46.276111 systemd-journald[251]: Collecting audit messages is disabled.
Sep 12 17:09:46.276162 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:09:46.276185 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:09:46.276206 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:09:46.276226 systemd-journald[251]: Journal started
Sep 12 17:09:46.276267 systemd-journald[251]: Runtime Journal (/run/log/journal/ec29fa6e28bbbf11bced4ff938a382ee) is 8.0M, max 75.3M, 67.3M free.
Sep 12 17:09:46.229976 systemd-modules-load[252]: Inserted module 'overlay'
Sep 12 17:09:46.289162 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:09:46.289207 kernel: Bridge firewalling registered
Sep 12 17:09:46.292409 systemd-modules-load[252]: Inserted module 'br_netfilter'
Sep 12 17:09:46.295445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:09:46.301172 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:09:46.307403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:09:46.319858 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:09:46.330458 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:09:46.332843 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:09:46.345760 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:09:46.372471 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:09:46.394210 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:09:46.403598 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:09:46.410635 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:09:46.421234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:09:46.430844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:09:46.453771 dracut-cmdline[282]: dracut-dracut-053
Sep 12 17:09:46.460721 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:09:46.530146 systemd-resolved[289]: Positive Trust Anchors:
Sep 12 17:09:46.530174 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:09:46.530234 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:09:46.647835 kernel: SCSI subsystem initialized
Sep 12 17:09:46.655653 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:09:46.667655 kernel: iscsi: registered transport (tcp)
Sep 12 17:09:46.690655 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:09:46.690729 kernel: QLogic iSCSI HBA Driver
Sep 12 17:09:46.764958 kernel: random: crng init done
Sep 12 17:09:46.765273 systemd-resolved[289]: Defaulting to hostname 'linux'.
Sep 12 17:09:46.768051 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:09:46.774671 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:09:46.805079 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:09:46.816862 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:09:46.853028 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:09:46.853120 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:09:46.854954 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:09:46.921598 kernel: raid6: neonx8 gen() 6764 MB/s
Sep 12 17:09:46.938583 kernel: raid6: neonx4 gen() 6559 MB/s
Sep 12 17:09:46.955582 kernel: raid6: neonx2 gen() 5447 MB/s
Sep 12 17:09:46.972577 kernel: raid6: neonx1 gen() 3950 MB/s
Sep 12 17:09:46.989584 kernel: raid6: int64x8 gen() 3805 MB/s
Sep 12 17:09:47.006584 kernel: raid6: int64x4 gen() 3713 MB/s
Sep 12 17:09:47.023581 kernel: raid6: int64x2 gen() 3610 MB/s
Sep 12 17:09:47.041575 kernel: raid6: int64x1 gen() 2758 MB/s
Sep 12 17:09:47.041630 kernel: raid6: using algorithm neonx8 gen() 6764 MB/s
Sep 12 17:09:47.059585 kernel: raid6: .... xor() 4809 MB/s, rmw enabled
Sep 12 17:09:47.059654 kernel: raid6: using neon recovery algorithm
Sep 12 17:09:47.068575 kernel: xor: measuring software checksum speed
Sep 12 17:09:47.068641 kernel: 8regs : 10205 MB/sec
Sep 12 17:09:47.071914 kernel: 32regs : 10986 MB/sec
Sep 12 17:09:47.071946 kernel: arm64_neon : 9355 MB/sec
Sep 12 17:09:47.071971 kernel: xor: using function: 32regs (10986 MB/sec)
Sep 12 17:09:47.157583 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:09:47.177006 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:09:47.188885 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:09:47.233799 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 12 17:09:47.242517 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:09:47.257922 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:09:47.294488 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Sep 12 17:09:47.352428 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:09:47.363802 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:09:47.491427 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:09:47.507951 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:09:47.564359 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:09:47.567345 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:09:47.567453 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:09:47.568176 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:09:47.592511 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:09:47.635003 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:09:47.719037 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:09:47.719108 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 12 17:09:47.725945 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 17:09:47.726258 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 17:09:47.732611 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:40:fb:c2:99:b1
Sep 12 17:09:47.734948 (udev-worker)[523]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:09:47.750207 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:09:47.750459 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:09:47.755772 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:09:47.758230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:09:47.758507 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:09:47.763305 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:09:47.784937 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 12 17:09:47.785008 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 17:09:47.785718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:09:47.802562 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 17:09:47.812707 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:09:47.812801 kernel: GPT:9289727 != 16777215
Sep 12 17:09:47.812833 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:09:47.812868 kernel: GPT:9289727 != 16777215
Sep 12 17:09:47.812895 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:09:47.812919 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:09:47.827381 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:09:47.845905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:09:47.897604 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:09:47.932446 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (531)
Sep 12 17:09:47.966564 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (521)
Sep 12 17:09:47.980479 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 17:09:48.038776 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:09:48.068747 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 17:09:48.082627 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 17:09:48.085768 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 17:09:48.099947 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:09:48.115094 disk-uuid[662]: Primary Header is updated.
Sep 12 17:09:48.115094 disk-uuid[662]: Secondary Entries is updated.
Sep 12 17:09:48.115094 disk-uuid[662]: Secondary Header is updated.
Sep 12 17:09:48.133572 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:09:48.141211 kernel: GPT:disk_guids don't match.
Sep 12 17:09:48.141272 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:09:48.141298 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:09:48.150570 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:09:49.151972 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:09:49.152057 disk-uuid[663]: The operation has completed successfully.
Sep 12 17:09:49.346886 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:09:49.349060 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:09:49.405854 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:09:49.429227 sh[1007]: Success
Sep 12 17:09:49.456587 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 17:09:49.575857 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:09:49.584787 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:09:49.590897 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:09:49.633580 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129
Sep 12 17:09:49.633663 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:09:49.633692 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:09:49.635475 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:09:49.636911 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:09:49.747589 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:09:49.787866 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:09:49.792431 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:09:49.802850 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:09:49.814105 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:09:49.853674 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:09:49.853748 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:09:49.855198 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:09:49.873613 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:09:49.895209 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:09:49.900613 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:09:49.911165 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:09:49.922942 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:09:50.024625 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:09:50.039878 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:09:50.096772 systemd-networkd[1199]: lo: Link UP
Sep 12 17:09:50.097326 systemd-networkd[1199]: lo: Gained carrier
Sep 12 17:09:50.101498 systemd-networkd[1199]: Enumeration completed
Sep 12 17:09:50.102720 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:09:50.105329 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:09:50.105337 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:09:50.120060 systemd[1]: Reached target network.target - Network.
Sep 12 17:09:50.122826 systemd-networkd[1199]: eth0: Link UP
Sep 12 17:09:50.122835 systemd-networkd[1199]: eth0: Gained carrier
Sep 12 17:09:50.122854 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:09:50.143685 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.22.10/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:09:50.393493 ignition[1132]: Ignition 2.19.0
Sep 12 17:09:50.393515 ignition[1132]: Stage: fetch-offline
Sep 12 17:09:50.397494 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:50.397581 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:50.399342 ignition[1132]: Ignition finished successfully
Sep 12 17:09:50.406665 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:09:50.417895 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:09:50.455160 ignition[1210]: Ignition 2.19.0
Sep 12 17:09:50.456523 ignition[1210]: Stage: fetch
Sep 12 17:09:50.458362 ignition[1210]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:50.458389 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:50.458588 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:50.482042 ignition[1210]: PUT result: OK
Sep 12 17:09:50.488726 ignition[1210]: parsed url from cmdline: ""
Sep 12 17:09:50.488742 ignition[1210]: no config URL provided
Sep 12 17:09:50.488761 ignition[1210]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:09:50.488787 ignition[1210]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:09:50.488836 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:50.493316 ignition[1210]: PUT result: OK
Sep 12 17:09:50.495337 ignition[1210]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 17:09:50.499268 ignition[1210]: GET result: OK
Sep 12 17:09:50.501719 ignition[1210]: parsing config with SHA512: f73090c516cf8238d1d94cc1796765711bd42e6a3a6903e24e58fe8dd0c388d989c3b1ba6b70c4b7e5291a9e9fa0ac0e3ed99cb7241f096208cdad1e8d59ea00
Sep 12 17:09:50.511028 unknown[1210]: fetched base config from "system"
Sep 12 17:09:50.511087 unknown[1210]: fetched base config from "system"
Sep 12 17:09:50.512755 ignition[1210]: fetch: fetch complete
Sep 12 17:09:50.511102 unknown[1210]: fetched user config from "aws"
Sep 12 17:09:50.512769 ignition[1210]: fetch: fetch passed
Sep 12 17:09:50.520425 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:09:50.512912 ignition[1210]: Ignition finished successfully
Sep 12 17:09:50.533970 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:09:50.570732 ignition[1216]: Ignition 2.19.0
Sep 12 17:09:50.570759 ignition[1216]: Stage: kargs
Sep 12 17:09:50.572589 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:50.572620 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:50.572875 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:50.582318 ignition[1216]: PUT result: OK
Sep 12 17:09:50.587809 ignition[1216]: kargs: kargs passed
Sep 12 17:09:50.589585 ignition[1216]: Ignition finished successfully
Sep 12 17:09:50.593605 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:09:50.604829 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:09:50.644509 ignition[1223]: Ignition 2.19.0
Sep 12 17:09:50.644529 ignition[1223]: Stage: disks
Sep 12 17:09:50.645275 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:50.645299 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:50.645460 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:50.657065 ignition[1223]: PUT result: OK
Sep 12 17:09:50.662998 ignition[1223]: disks: disks passed
Sep 12 17:09:50.663105 ignition[1223]: Ignition finished successfully
Sep 12 17:09:50.666142 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:09:50.671822 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:09:50.676695 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:09:50.680223 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:09:50.682470 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:09:50.690994 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:09:50.703706 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:09:50.748831 systemd-fsck[1233]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:09:50.756623 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:09:50.765788 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:09:50.867563 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none.
Sep 12 17:09:50.869059 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:09:50.873470 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:09:50.891740 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:09:50.899813 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:09:50.906668 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:09:50.906770 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:09:50.906820 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:09:50.932589 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1252)
Sep 12 17:09:50.936332 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:09:50.947644 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:09:50.947690 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:09:50.947717 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:09:50.954906 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:09:50.966580 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:09:50.969380 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:09:51.468239 initrd-setup-root[1277]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:09:51.502643 initrd-setup-root[1284]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:09:51.512140 initrd-setup-root[1291]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:09:51.521756 initrd-setup-root[1298]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:09:51.899869 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:09:51.911902 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:09:51.920128 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:09:51.937163 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:09:51.941158 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:09:51.986215 ignition[1366]: INFO : Ignition 2.19.0
Sep 12 17:09:51.986215 ignition[1366]: INFO : Stage: mount
Sep 12 17:09:51.997450 ignition[1366]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:51.997450 ignition[1366]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:51.997450 ignition[1366]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:52.004311 ignition[1366]: INFO : PUT result: OK
Sep 12 17:09:52.003950 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:09:52.006633 ignition[1366]: INFO : mount: mount passed
Sep 12 17:09:52.006633 ignition[1366]: INFO : Ignition finished successfully
Sep 12 17:09:52.022274 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:09:52.037888 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:09:52.069956 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:09:52.070906 systemd-networkd[1199]: eth0: Gained IPv6LL
Sep 12 17:09:52.105583 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1378)
Sep 12 17:09:52.110383 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:09:52.110458 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:09:52.110486 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:09:52.117586 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:09:52.121450 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:09:52.169669 ignition[1395]: INFO : Ignition 2.19.0
Sep 12 17:09:52.169669 ignition[1395]: INFO : Stage: files
Sep 12 17:09:52.169669 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:52.169669 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:52.179415 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:52.179415 ignition[1395]: INFO : PUT result: OK
Sep 12 17:09:52.187463 ignition[1395]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:09:52.192681 ignition[1395]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:09:52.192681 ignition[1395]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:09:52.254999 ignition[1395]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:09:52.258578 ignition[1395]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:09:52.262420 unknown[1395]: wrote ssh authorized keys file for user: core
Sep 12 17:09:52.266392 ignition[1395]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:09:52.271797 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 12 17:09:52.276173 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 12 17:09:52.367939 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:09:52.630665 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 12 17:09:52.634866 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:09:52.639247 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:09:52.643077 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:09:52.646805 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:09:52.646805 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:09:52.654324 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:09:52.654324 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:09:52.661879 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:09:52.666717 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:09:52.666717 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:09:52.666717 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:09:52.666717 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:09:52.666717 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:09:52.666717 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 12 17:09:53.058480 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:09:53.467027 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:09:53.467027 ignition[1395]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:09:53.475061 ignition[1395]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:09:53.475061 ignition[1395]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:09:53.475061 ignition[1395]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:09:53.475061 ignition[1395]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:09:53.475061 ignition[1395]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:09:53.475061 ignition[1395]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:09:53.475061 ignition[1395]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:09:53.475061 ignition[1395]: INFO : files: files passed
Sep 12 17:09:53.475061 ignition[1395]: INFO : Ignition finished successfully
Sep 12 17:09:53.505600 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:09:53.515838 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:09:53.527997 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:09:53.536701 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:09:53.540709 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:09:53.567195 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:09:53.567195 initrd-setup-root-after-ignition[1424]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:09:53.576252 initrd-setup-root-after-ignition[1428]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:09:53.581440 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:09:53.586643 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:09:53.598026 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:09:53.655602 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:09:53.657647 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:09:53.663515 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:09:53.667808 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:09:53.674622 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:09:53.683943 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:09:53.723934 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:09:53.732897 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:09:53.765187 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:09:53.765610 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:09:53.775154 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:09:53.781924 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:09:53.782214 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:09:53.785668 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:09:53.788246 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:09:53.790693 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:09:53.792947 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:09:53.793202 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:09:53.793637 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:09:53.793941 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:09:53.794313 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:09:53.795276 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:09:53.830475 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:09:53.834083 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:09:53.834416 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:09:53.841605 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:09:53.844593 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:09:53.851965 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:09:53.852260 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:09:53.855592 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:09:53.856342 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:09:53.860202 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:09:53.860559 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:09:53.864010 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:09:53.864331 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:09:53.881709 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:09:53.901718 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Sep 12 17:09:53.903743 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:09:53.905217 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:09:53.914959 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:09:53.915201 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:09:53.934370 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:09:53.938917 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:09:53.948391 ignition[1448]: INFO : Ignition 2.19.0 Sep 12 17:09:53.950693 ignition[1448]: INFO : Stage: umount Sep 12 17:09:53.952604 ignition[1448]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:53.954928 ignition[1448]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:09:53.954928 ignition[1448]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:09:53.962268 ignition[1448]: INFO : PUT result: OK Sep 12 17:09:53.972518 ignition[1448]: INFO : umount: umount passed Sep 12 17:09:53.976736 ignition[1448]: INFO : Ignition finished successfully Sep 12 17:09:53.977627 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:09:53.985227 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:09:53.988674 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:09:53.992354 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:09:53.992592 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:09:53.998652 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:09:53.998794 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:09:54.003036 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:09:54.003139 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:09:54.008337 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:09:54.008454 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:09:54.012678 systemd[1]: Stopped target network.target - Network. Sep 12 17:09:54.020904 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:09:54.021034 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:09:54.023673 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:09:54.025934 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:09:54.030237 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:09:54.033360 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:09:54.035507 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:09:54.038139 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:09:54.038241 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:09:54.041217 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:09:54.041314 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:09:54.045102 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:09:54.045217 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:09:54.049416 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:09:54.049524 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Sep 12 17:09:54.054460 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:09:54.054596 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:09:54.057401 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:09:54.060267 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:09:54.096664 systemd-networkd[1199]: eth0: DHCPv6 lease lost Sep 12 17:09:54.099841 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:09:54.100760 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:09:54.111310 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:09:54.111518 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:09:54.123720 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:09:54.123818 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:09:54.138834 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:09:54.141004 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:09:54.141127 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:09:54.144140 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:09:54.144259 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:09:54.147192 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:09:54.147305 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:09:54.150345 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:09:54.150453 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:09:54.157709 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:09:54.200466 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:09:54.201943 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:09:54.210796 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:09:54.210965 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:09:54.214191 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:09:54.214275 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:09:54.221477 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:09:54.223791 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:09:54.232720 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:09:54.232834 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:09:54.235493 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:09:54.235644 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:09:54.254949 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:09:54.261059 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:09:54.261200 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:09:54.273015 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Sep 12 17:09:54.283041 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:09:54.286096 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:09:54.286221 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:09:54.294401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:09:54.294520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:54.300359 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:09:54.300780 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:09:54.305979 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:09:54.306161 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:09:54.314584 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:09:54.332905 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:09:54.355420 systemd[1]: Switching root. Sep 12 17:09:54.415752 systemd-journald[251]: Journal stopped Sep 12 17:09:57.350520 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Sep 12 17:09:57.350690 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:09:57.350735 kernel: SELinux: policy capability open_perms=1 Sep 12 17:09:57.350767 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:09:57.350807 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:09:57.350839 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:09:57.350871 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:09:57.350908 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:09:57.350938 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:09:57.350970 kernel: audit: type=1403 audit(1757696995.088:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:09:57.351010 systemd[1]: Successfully loaded SELinux policy in 87.538ms. Sep 12 17:09:57.351069 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.375ms. Sep 12 17:09:57.351103 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:09:57.351136 systemd[1]: Detected virtualization amazon. Sep 12 17:09:57.351168 systemd[1]: Detected architecture arm64. Sep 12 17:09:57.351202 systemd[1]: Detected first boot. Sep 12 17:09:57.351235 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:09:57.351268 zram_generator::config[1490]: No configuration found. Sep 12 17:09:57.351302 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:09:57.351338 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:09:57.351379 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:09:57.351413 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:09:57.351443 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:09:57.351475 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Sep 12 17:09:57.351510 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:09:57.352558 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:09:57.352603 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:09:57.352638 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:09:57.352669 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:09:57.352699 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:09:57.352728 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:09:57.352760 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:09:57.352827 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:09:57.352882 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:09:57.352919 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:09:57.352954 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:09:57.352984 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:09:57.353016 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:09:57.353048 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:09:57.353080 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:09:57.353110 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:09:57.353147 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:09:57.353177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:09:57.353209 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:09:57.353241 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:09:57.353273 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:09:57.353303 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:09:57.353333 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:09:57.353365 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:09:57.353401 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:09:57.353433 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:09:57.354253 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:09:57.355308 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:09:57.355346 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:09:57.355376 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:09:57.355405 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:09:57.355437 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:09:57.355469 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 12 17:09:57.355508 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:09:57.355555 systemd[1]: Reached target machines.target - Containers. Sep 12 17:09:57.355592 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:09:57.355625 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:09:57.355654 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:09:57.355684 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:09:57.356283 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:09:57.356318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:09:57.356355 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:09:57.357595 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:09:57.357638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:09:57.357668 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:09:57.357699 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:09:57.357729 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:09:57.357759 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:09:57.357788 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:09:57.357817 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:09:57.357861 kernel: loop: module loaded Sep 12 17:09:57.357890 kernel: fuse: init (API version 7.39) Sep 12 17:09:57.357919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:09:57.357951 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:09:57.357981 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:09:57.358012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:09:57.358043 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:09:57.358076 systemd[1]: Stopped verity-setup.service. Sep 12 17:09:57.358105 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:09:57.358138 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:09:57.358167 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:09:57.358196 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:09:57.358225 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:09:57.358257 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:09:57.358288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:09:57.358321 kernel: ACPI: bus type drm_connector registered Sep 12 17:09:57.358349 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:09:57.358378 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Sep 12 17:09:57.358407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:09:57.358436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:09:57.358466 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:09:57.358498 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:09:57.358531 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:09:57.359601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:09:57.359633 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:09:57.359663 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:09:57.359752 systemd-journald[1568]: Collecting audit messages is disabled. Sep 12 17:09:57.359814 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:09:57.359846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:09:57.359875 systemd-journald[1568]: Journal started Sep 12 17:09:57.359922 systemd-journald[1568]: Runtime Journal (/run/log/journal/ec29fa6e28bbbf11bced4ff938a382ee) is 8.0M, max 75.3M, 67.3M free. Sep 12 17:09:56.665842 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:09:56.765498 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 17:09:56.766399 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:09:57.366906 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:09:57.369526 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:09:57.375717 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:09:57.382423 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:09:57.391653 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:09:57.422199 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:09:57.435941 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:09:57.452529 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:09:57.456799 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:09:57.456913 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:09:57.465071 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:09:57.476858 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:09:57.481675 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:09:57.489297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:09:57.509870 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:09:57.519861 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:09:57.522700 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:09:57.531839 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Sep 12 17:09:57.534360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:09:57.537775 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:09:57.554989 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:09:57.565966 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:09:57.574328 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:09:57.577238 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:09:57.583237 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:09:57.598972 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:09:57.611967 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:09:57.620952 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:09:57.678616 kernel: loop0: detected capacity change from 0 to 52536 Sep 12 17:09:57.696925 systemd-journald[1568]: Time spent on flushing to /var/log/journal/ec29fa6e28bbbf11bced4ff938a382ee is 121.911ms for 914 entries. Sep 12 17:09:57.696925 systemd-journald[1568]: System Journal (/var/log/journal/ec29fa6e28bbbf11bced4ff938a382ee) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:09:57.841755 systemd-journald[1568]: Received client request to flush runtime journal. Sep 12 17:09:57.841830 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:09:57.708809 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:09:57.727107 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:09:57.731565 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:09:57.749449 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:09:57.758697 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:09:57.821327 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:09:57.824438 systemd-tmpfiles[1620]: ACLs are not supported, ignoring. Sep 12 17:09:57.824463 systemd-tmpfiles[1620]: ACLs are not supported, ignoring. Sep 12 17:09:57.848462 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:09:57.855652 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:09:57.869050 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:09:57.876081 kernel: loop1: detected capacity change from 0 to 114328 Sep 12 17:09:57.958476 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:09:57.971028 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:09:57.996617 kernel: loop2: detected capacity change from 0 to 207008 Sep 12 17:09:58.037350 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Sep 12 17:09:58.037917 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Sep 12 17:09:58.049366 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 17:09:58.172579 kernel: loop3: detected capacity change from 0 to 114432 Sep 12 17:09:58.294564 kernel: loop4: detected capacity change from 0 to 52536 Sep 12 17:09:58.318910 kernel: loop5: detected capacity change from 0 to 114328 Sep 12 17:09:58.337578 kernel: loop6: detected capacity change from 0 to 207008 Sep 12 17:09:58.371615 kernel: loop7: detected capacity change from 0 to 114432 Sep 12 17:09:58.384608 (sd-merge)[1647]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 12 17:09:58.386118 (sd-merge)[1647]: Merged extensions into '/usr'. Sep 12 17:09:58.394255 systemd[1]: Reloading requested from client PID 1619 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:09:58.394610 systemd[1]: Reloading... Sep 12 17:09:58.530596 zram_generator::config[1670]: No configuration found. Sep 12 17:09:58.987204 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:09:59.115168 systemd[1]: Reloading finished in 719 ms. Sep 12 17:09:59.158666 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:09:59.162465 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:09:59.177964 systemd[1]: Starting ensure-sysext.service... Sep 12 17:09:59.187921 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:09:59.204035 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:09:59.221321 ldconfig[1614]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:09:59.222876 systemd[1]: Reloading requested from client PID 1725 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:09:59.222913 systemd[1]: Reloading... Sep 12 17:09:59.298225 systemd-udevd[1727]: Using default interface naming scheme 'v255'. Sep 12 17:09:59.307979 systemd-tmpfiles[1726]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:09:59.309806 systemd-tmpfiles[1726]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:09:59.316173 systemd-tmpfiles[1726]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:09:59.316862 systemd-tmpfiles[1726]: ACLs are not supported, ignoring. Sep 12 17:09:59.317062 systemd-tmpfiles[1726]: ACLs are not supported, ignoring. Sep 12 17:09:59.333458 systemd-tmpfiles[1726]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:09:59.333492 systemd-tmpfiles[1726]: Skipping /boot Sep 12 17:09:59.387186 systemd-tmpfiles[1726]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:09:59.387223 systemd-tmpfiles[1726]: Skipping /boot Sep 12 17:09:59.448592 zram_generator::config[1749]: No configuration found. Sep 12 17:09:59.670697 (udev-worker)[1760]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:09:59.915064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 12 17:09:59.991644 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1767) Sep 12 17:10:00.105378 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:10:00.107946 systemd[1]: Reloading finished in 884 ms. Sep 12 17:10:00.140995 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:10:00.145132 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:10:00.163401 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:10:00.307075 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 17:10:00.313415 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:10:00.335130 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:10:00.354134 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:10:00.359084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:10:00.369116 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:10:00.376340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:10:00.381318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:10:00.390865 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:10:00.398003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:10:00.401426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:10:00.405224 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:10:00.421163 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:10:00.431160 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:10:00.441192 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:10:00.455719 lvm[1930]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:10:00.444978 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:10:00.461174 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:10:00.470110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:10:00.493798 systemd[1]: Finished ensure-sysext.service. Sep 12 17:10:00.528932 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:10:00.533243 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:10:00.549688 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:10:00.565136 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:10:00.566701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:10:00.571485 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 12 17:10:00.606351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:10:00.611812 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:10:00.623373 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:10:00.644201 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:10:00.650457 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:10:00.651976 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:10:00.656382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:10:00.679134 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:10:00.683179 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:10:00.683866 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:10:00.692372 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:10:00.700519 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:10:00.709464 lvm[1946]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:10:00.728391 augenrules[1963]: No rules Sep 12 17:10:00.734385 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:10:00.755134 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:10:00.770839 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:10:00.788178 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:10:00.813324 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:10:00.822776 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:10:00.871689 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:10:00.953487 systemd-networkd[1937]: lo: Link UP Sep 12 17:10:00.954069 systemd-networkd[1937]: lo: Gained carrier Sep 12 17:10:00.957623 systemd-networkd[1937]: Enumeration completed Sep 12 17:10:00.957837 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:10:00.959217 systemd-networkd[1937]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:10:00.959226 systemd-networkd[1937]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:10:00.960207 systemd-resolved[1938]: Positive Trust Anchors: Sep 12 17:10:00.960231 systemd-resolved[1938]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:10:00.960294 systemd-resolved[1938]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:10:00.969427 systemd-networkd[1937]: eth0: Link UP Sep 12 17:10:00.969792 systemd-networkd[1937]: eth0: Gained carrier Sep 12 17:10:00.969826 systemd-networkd[1937]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:10:00.975955 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:10:00.977076 systemd-resolved[1938]: Defaulting to hostname 'linux'. Sep 12 17:10:00.982691 systemd-networkd[1937]: eth0: DHCPv4 address 172.31.22.10/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:10:00.983167 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:10:00.985856 systemd[1]: Reached target network.target - Network. Sep 12 17:10:00.988008 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:10:00.990731 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:10:00.993571 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:10:00.996995 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:10:01.000662 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:10:01.003516 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:10:01.006511 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:10:01.009500 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:10:01.009830 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:10:01.012126 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:10:01.015944 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:10:01.023932 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:10:01.044803 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:10:01.048468 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:10:01.051631 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:10:01.053901 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:10:01.056367 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:10:01.056433 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:10:01.065988 systemd[1]: Starting containerd.service - containerd container runtime... 
Sep 12 17:10:01.076154 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:10:01.083902 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:10:01.093110 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:10:01.104291 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:10:01.107934 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:10:01.111835 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:10:01.123947 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 17:10:01.134831 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:10:01.143191 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 17:10:01.152466 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:10:01.165830 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:10:01.181033 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:10:01.188489 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:10:01.191017 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:10:01.192896 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:10:01.225115 jq[1989]: false Sep 12 17:10:01.214928 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:10:01.229982 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:10:01.230433 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:10:01.273707 jq[2002]: true Sep 12 17:10:01.344132 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:10:01.350050 dbus-daemon[1988]: [system] SELinux support is enabled Sep 12 17:10:01.344606 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:10:01.350515 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:10:01.353107 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:10:01.353627 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:10:01.365485 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:10:01.365597 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:10:01.365853 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:10:01.365894 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 12 17:10:01.397384 extend-filesystems[1990]: Found loop4 Sep 12 17:10:01.397384 extend-filesystems[1990]: Found loop5 Sep 12 17:10:01.397384 extend-filesystems[1990]: Found loop6 Sep 12 17:10:01.397384 extend-filesystems[1990]: Found loop7 Sep 12 17:10:01.397384 extend-filesystems[1990]: Found nvme0n1 Sep 12 17:10:01.397384 extend-filesystems[1990]: Found nvme0n1p1 Sep 12 17:10:01.397384 extend-filesystems[1990]: Found nvme0n1p2 Sep 12 17:10:01.397384 extend-filesystems[1990]: Found nvme0n1p3 Sep 12 17:10:01.397384 extend-filesystems[1990]: Found usr Sep 12 17:10:01.397384 extend-filesystems[1990]: Found nvme0n1p4 Sep 12 17:10:01.397384 extend-filesystems[1990]: Found nvme0n1p6 Sep 12 17:10:01.397384 extend-filesystems[1990]: Found nvme0n1p7 Sep 12 17:10:01.397384 extend-filesystems[1990]: Found nvme0n1p9 Sep 12 17:10:01.397384 extend-filesystems[1990]: Checking size of /dev/nvme0n1p9 Sep 12 17:10:01.401840 ntpd[1992]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:26:25 UTC 2025 (1): Starting Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:26:25 UTC 2025 (1): Starting Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: ---------------------------------------------------- Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: corporation. Support and training for ntp-4 are Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: available at https://www.nwtime.org/support Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: ---------------------------------------------------- Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: proto: precision = 0.096 usec (-23) Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: basedate set to 2025-08-31 Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: gps base set to 2025-08-31 (week 2382) Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: Listen normally on 3 eth0 172.31.22.10:123 Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: Listen normally on 4 lo [::1]:123 Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: bind(21) AF_INET6 fe80::440:fbff:fec2:99b1%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: unable to create socket on eth0 (5) for fe80::440:fbff:fec2:99b1%2#123 Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: failed to init interface for address fe80::440:fbff:fec2:99b1%2 Sep 12 17:10:01.508106 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: Listening on routing socket on fd #21 for interface updates Sep 12 17:10:01.509347 jq[2013]: true Sep 12 17:10:01.491838 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Sep 12 17:10:01.515586 tar[2012]: linux-arm64/LICENSE Sep 12 17:10:01.515586 tar[2012]: linux-arm64/helm Sep 12 17:10:01.401896 ntpd[1992]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:10:01.401918 ntpd[1992]: ---------------------------------------------------- Sep 12 17:10:01.401938 ntpd[1992]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:10:01.401958 ntpd[1992]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:10:01.401976 ntpd[1992]: corporation. Support and training for ntp-4 are Sep 12 17:10:01.401995 ntpd[1992]: available at https://www.nwtime.org/support Sep 12 17:10:01.402015 ntpd[1992]: ---------------------------------------------------- Sep 12 17:10:01.409614 ntpd[1992]: proto: precision = 0.096 usec (-23) Sep 12 17:10:01.411316 ntpd[1992]: basedate set to 2025-08-31 Sep 12 17:10:01.411349 ntpd[1992]: gps base set to 2025-08-31 (week 2382) Sep 12 17:10:01.414230 dbus-daemon[1988]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1937 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 17:10:01.443017 ntpd[1992]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:10:01.522506 extend-filesystems[1990]: Resized partition /dev/nvme0n1p9 Sep 12 17:10:01.443119 ntpd[1992]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:10:01.443474 ntpd[1992]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:10:01.443603 ntpd[1992]: Listen normally on 3 eth0 172.31.22.10:123 Sep 12 17:10:01.443688 ntpd[1992]: Listen normally on 4 lo [::1]:123 Sep 12 17:10:01.443782 ntpd[1992]: bind(21) AF_INET6 fe80::440:fbff:fec2:99b1%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:10:01.443828 ntpd[1992]: unable to create socket on eth0 (5) for fe80::440:fbff:fec2:99b1%2#123 Sep 12 17:10:01.443857 ntpd[1992]: failed to init interface for address fe80::440:fbff:fec2:99b1%2 Sep 12 17:10:01.443926 ntpd[1992]: Listening on routing socket on fd #21 for interface updates Sep 12 17:10:01.534488 extend-filesystems[2041]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:10:01.565274 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 17:10:01.565376 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:10:01.565376 ntpd[1992]: 12 Sep 17:10:01 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:10:01.551923 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:10:01.542364 (ntainerd)[2023]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:10:01.551977 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:10:01.634480 update_engine[2000]: I20250912 17:10:01.626271 2000 main.cc:92] Flatcar Update Engine starting Sep 12 17:10:01.649622 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:10:01.654622 update_engine[2000]: I20250912 17:10:01.651415 2000 update_check_scheduler.cc:74] Next update check in 4m49s Sep 12 17:10:01.658639 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:10:01.686722 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 17:10:01.678899 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 12 17:10:01.689596 extend-filesystems[2041]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 17:10:01.689596 extend-filesystems[2041]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:10:01.689596 extend-filesystems[2041]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 17:10:01.713459 extend-filesystems[1990]: Resized filesystem in /dev/nvme0n1p9 Sep 12 17:10:01.721308 bash[2054]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:10:01.722379 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:10:01.722815 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:10:01.729126 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:10:01.744259 systemd[1]: Starting sshkeys.service... Sep 12 17:10:01.801911 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:10:01.860433 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:10:01.874454 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:10:01.918801 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1759) Sep 12 17:10:01.922686 systemd-logind[1999]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:10:01.927708 systemd-logind[1999]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 12 17:10:01.932163 systemd-logind[1999]: New seat seat0. Sep 12 17:10:01.950896 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:10:01.966739 coreos-metadata[1987]: Sep 12 17:10:01.965 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:10:01.969422 coreos-metadata[1987]: Sep 12 17:10:01.968 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 17:10:01.975044 coreos-metadata[1987]: Sep 12 17:10:01.974 INFO Fetch successful Sep 12 17:10:01.975044 coreos-metadata[1987]: Sep 12 17:10:01.974 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 17:10:01.976648 coreos-metadata[1987]: Sep 12 17:10:01.976 INFO Fetch successful Sep 12 17:10:01.976648 coreos-metadata[1987]: Sep 12 17:10:01.976 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 17:10:01.977595 coreos-metadata[1987]: Sep 12 17:10:01.977 INFO Fetch successful Sep 12 17:10:01.977595 coreos-metadata[1987]: Sep 12 17:10:01.977 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 17:10:01.980272 coreos-metadata[1987]: Sep 12 17:10:01.979 INFO Fetch successful Sep 12 17:10:01.980272 coreos-metadata[1987]: Sep 12 17:10:01.979 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 17:10:01.981619 coreos-metadata[1987]: Sep 12 17:10:01.981 INFO Fetch failed with 404: resource not found Sep 12 17:10:01.981619 coreos-metadata[1987]: Sep 12 17:10:01.981 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 17:10:01.985254 coreos-metadata[1987]: Sep 12 17:10:01.984 INFO Fetch successful Sep 12 17:10:01.985254 coreos-metadata[1987]: Sep 12 17:10:01.984 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 17:10:01.985254 coreos-metadata[1987]: Sep 12 17:10:01.984 INFO Fetch successful Sep 12 
17:10:01.985254 coreos-metadata[1987]: Sep 12 17:10:01.984 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 17:10:01.985254 coreos-metadata[1987]: Sep 12 17:10:01.984 INFO Fetch successful Sep 12 17:10:01.985254 coreos-metadata[1987]: Sep 12 17:10:01.984 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 17:10:01.987851 coreos-metadata[1987]: Sep 12 17:10:01.986 INFO Fetch successful Sep 12 17:10:01.987851 coreos-metadata[1987]: Sep 12 17:10:01.986 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 17:10:01.987851 coreos-metadata[1987]: Sep 12 17:10:01.986 INFO Fetch successful Sep 12 17:10:02.048439 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:10:02.055288 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:10:02.181918 systemd-networkd[1937]: eth0: Gained IPv6LL Sep 12 17:10:02.203238 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:10:02.210846 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:10:02.226333 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:10:02.239229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:02.367898 dbus-daemon[1988]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 17:10:02.373799 dbus-daemon[1988]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2032 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 17:10:02.380279 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:10:02.389032 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 17:10:02.408207 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 17:10:02.446595 containerd[2023]: time="2025-09-12T17:10:02.445790568Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:10:02.475795 coreos-metadata[2069]: Sep 12 17:10:02.474 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:10:02.484977 coreos-metadata[2069]: Sep 12 17:10:02.481 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:10:02.484977 coreos-metadata[2069]: Sep 12 17:10:02.484 INFO Fetch successful Sep 12 17:10:02.484977 coreos-metadata[2069]: Sep 12 17:10:02.484 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:10:02.490764 coreos-metadata[2069]: Sep 12 17:10:02.490 INFO Fetch successful Sep 12 17:10:02.525681 unknown[2069]: wrote ssh authorized keys file for user: core Sep 12 17:10:02.604746 polkitd[2138]: Started polkitd version 121 Sep 12 17:10:02.654907 polkitd[2138]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 17:10:02.655326 polkitd[2138]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 17:10:02.670588 update-ssh-keys[2158]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:10:02.668386 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Sep 12 17:10:02.664757 polkitd[2138]: Finished loading, compiling and executing 2 rules Sep 12 17:10:02.667780 dbus-daemon[1988]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 17:10:02.680060 polkitd[2138]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 17:10:02.677518 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 17:10:02.687692 systemd[1]: Finished sshkeys.service. Sep 12 17:10:02.734735 locksmithd[2057]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:10:02.737621 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:10:02.746520 amazon-ssm-agent[2121]: Initializing new seelog logger Sep 12 17:10:02.748572 amazon-ssm-agent[2121]: New Seelog Logger Creation Complete Sep 12 17:10:02.748572 amazon-ssm-agent[2121]: 2025/09/12 17:10:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:02.748572 amazon-ssm-agent[2121]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:02.752003 amazon-ssm-agent[2121]: 2025/09/12 17:10:02 processing appconfig overrides Sep 12 17:10:02.752888 amazon-ssm-agent[2121]: 2025/09/12 17:10:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:02.755760 amazon-ssm-agent[2121]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:02.755760 amazon-ssm-agent[2121]: 2025/09/12 17:10:02 processing appconfig overrides Sep 12 17:10:02.755760 amazon-ssm-agent[2121]: 2025/09/12 17:10:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:02.755760 amazon-ssm-agent[2121]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:02.755760 amazon-ssm-agent[2121]: 2025/09/12 17:10:02 processing appconfig overrides Sep 12 17:10:02.758501 amazon-ssm-agent[2121]: 2025-09-12 17:10:02 INFO Proxy environment variables: Sep 12 17:10:02.759643 systemd-hostnamed[2032]: Hostname set to (transient) Sep 12 17:10:02.760388 systemd-resolved[1938]: System hostname changed to 'ip-172-31-22-10'. Sep 12 17:10:02.782581 amazon-ssm-agent[2121]: 2025/09/12 17:10:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:02.782581 amazon-ssm-agent[2121]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:02.782581 amazon-ssm-agent[2121]: 2025/09/12 17:10:02 processing appconfig overrides Sep 12 17:10:02.788411 containerd[2023]: time="2025-09-12T17:10:02.788333293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:02.808394 containerd[2023]: time="2025-09-12T17:10:02.808311745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.810611917Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.810687385Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.811033597Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.811075249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.811209109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.811241401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.811596721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.811636333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.811669309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.811694689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.811921321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:02.813000 containerd[2023]: time="2025-09-12T17:10:02.812387173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:02.821014 containerd[2023]: time="2025-09-12T17:10:02.820103966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:02.821014 containerd[2023]: time="2025-09-12T17:10:02.820167194Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:10:02.821014 containerd[2023]: time="2025-09-12T17:10:02.820431626Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:10:02.821014 containerd[2023]: time="2025-09-12T17:10:02.820670246Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:10:02.849571 containerd[2023]: time="2025-09-12T17:10:02.845505554Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:10:02.849571 containerd[2023]: time="2025-09-12T17:10:02.845660078Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:10:02.849571 containerd[2023]: time="2025-09-12T17:10:02.845700122Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:10:02.849571 containerd[2023]: time="2025-09-12T17:10:02.845738882Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Sep 12 17:10:02.849571 containerd[2023]: time="2025-09-12T17:10:02.845772230Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:10:02.849571 containerd[2023]: time="2025-09-12T17:10:02.846088862Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:10:02.853843 containerd[2023]: time="2025-09-12T17:10:02.846524846Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:10:02.855196 containerd[2023]: time="2025-09-12T17:10:02.854853542Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:10:02.863339 amazon-ssm-agent[2121]: 2025-09-12 17:10:02 INFO no_proxy: Sep 12 17:10:02.867680 containerd[2023]: time="2025-09-12T17:10:02.866119154Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:10:02.867680 containerd[2023]: time="2025-09-12T17:10:02.866195702Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:10:02.867680 containerd[2023]: time="2025-09-12T17:10:02.866241482Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:10:02.867680 containerd[2023]: time="2025-09-12T17:10:02.866278082Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:10:02.867680 containerd[2023]: time="2025-09-12T17:10:02.866311994Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:10:02.867680 containerd[2023]: time="2025-09-12T17:10:02.866346518Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:10:02.867680 containerd[2023]: time="2025-09-12T17:10:02.866380022Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:10:02.867680 containerd[2023]: time="2025-09-12T17:10:02.866425046Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:10:02.867680 containerd[2023]: time="2025-09-12T17:10:02.866462114Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:10:02.867680 containerd[2023]: time="2025-09-12T17:10:02.866491670Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.871629386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.871706246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.871742642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.871791698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.871824650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.871872794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.871905230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.871937054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.871968290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.872004878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.872037434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.872072438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.872103518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.872149754Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:10:02.873890 containerd[2023]: time="2025-09-12T17:10:02.872201594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.874688 containerd[2023]: time="2025-09-12T17:10:02.872232446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.874688 containerd[2023]: time="2025-09-12T17:10:02.872262386Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:10:02.882568 containerd[2023]: time="2025-09-12T17:10:02.872510630Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:10:02.882568 containerd[2023]: time="2025-09-12T17:10:02.879637682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:10:02.882568 containerd[2023]: time="2025-09-12T17:10:02.879682550Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:10:02.882568 containerd[2023]: time="2025-09-12T17:10:02.879714542Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:10:02.882568 containerd[2023]: time="2025-09-12T17:10:02.879740654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.882568 containerd[2023]: time="2025-09-12T17:10:02.879776378Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Sep 12 17:10:02.882568 containerd[2023]: time="2025-09-12T17:10:02.879801794Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:10:02.882568 containerd[2023]: time="2025-09-12T17:10:02.879827570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:10:02.883010 containerd[2023]: time="2025-09-12T17:10:02.881085146Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:10:02.883010 containerd[2023]: time="2025-09-12T17:10:02.881247686Z" level=info msg="Connect containerd service" Sep 12 17:10:02.883010 containerd[2023]: time="2025-09-12T17:10:02.881347946Z" level=info msg="using legacy CRI server" Sep 12 17:10:02.883010 containerd[2023]: time="2025-09-12T17:10:02.881373074Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:10:02.883010 containerd[2023]: time="2025-09-12T17:10:02.881648438Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:10:02.888458 
containerd[2023]: time="2025-09-12T17:10:02.888304934Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:10:02.891573 containerd[2023]: time="2025-09-12T17:10:02.889459058Z" level=info msg="Start subscribing containerd event" Sep 12 17:10:02.891573 containerd[2023]: time="2025-09-12T17:10:02.889585358Z" level=info msg="Start recovering state" Sep 12 17:10:02.891573 containerd[2023]: time="2025-09-12T17:10:02.889770002Z" level=info msg="Start event monitor" Sep 12 17:10:02.891573 containerd[2023]: time="2025-09-12T17:10:02.889800278Z" level=info msg="Start snapshots syncer" Sep 12 17:10:02.891573 containerd[2023]: time="2025-09-12T17:10:02.889825538Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:10:02.891573 containerd[2023]: time="2025-09-12T17:10:02.889861598Z" level=info msg="Start streaming server" Sep 12 17:10:02.910591 containerd[2023]: time="2025-09-12T17:10:02.909933590Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:10:02.910591 containerd[2023]: time="2025-09-12T17:10:02.910103978Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:10:02.910404 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:10:02.911493 containerd[2023]: time="2025-09-12T17:10:02.910957730Z" level=info msg="containerd successfully booted in 0.467433s" Sep 12 17:10:02.964912 amazon-ssm-agent[2121]: 2025-09-12 17:10:02 INFO https_proxy: Sep 12 17:10:03.062911 amazon-ssm-agent[2121]: 2025-09-12 17:10:02 INFO http_proxy: Sep 12 17:10:03.162741 amazon-ssm-agent[2121]: 2025-09-12 17:10:02 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:10:03.261740 amazon-ssm-agent[2121]: 2025-09-12 17:10:02 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:10:03.362160 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO Agent will take identity from EC2 Sep 12 17:10:03.462224 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:10:03.561527 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:10:03.660880 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:10:03.760057 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 17:10:03.860525 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 12 17:10:03.953529 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:10:03.953529 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 17:10:03.953529 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [Registrar] Starting registrar module Sep 12 17:10:03.953529 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 17:10:03.953974 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [EC2Identity] EC2 registration was successful. 
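At this point containerd has booted and is serving its gRPC and ttrpc APIs on /run/containerd/containerd.sock. The CNI load error above is expected this early: /etc/cni/net.d stays empty until a pod network add-on installs a config, and the conf syncer just started will pick it up. A minimal sketch of talking to that socket with the containerd Go client (assuming the github.com/containerd/containerd module; "k8s.io" is the namespace the CRI plugin uses):

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Dial the socket the daemon reports serving on in the log above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Kubernetes-managed resources live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        version, err := client.Version(ctx)
        if err != nil {
            panic(err)
        }
        fmt.Println("containerd", version.Version) // e.g. v1.7.21, per the log
    }
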
Sep 12 17:10:03.953974 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:10:03.953974 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:10:03.953974 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:10:03.960081 amazon-ssm-agent[2121]: 2025-09-12 17:10:03 INFO [CredentialRefresher] Next credential rotation will be in 30.866651680466667 minutes Sep 12 17:10:04.027829 tar[2012]: linux-arm64/README.md Sep 12 17:10:04.072703 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:10:04.194523 sshd_keygen[2034]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:10:04.238404 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:10:04.255184 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:10:04.268743 systemd[1]: Started sshd@0-172.31.22.10:22-147.75.109.163:34590.service - OpenSSH per-connection server daemon (147.75.109.163:34590). Sep 12 17:10:04.291642 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:10:04.292364 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:10:04.308119 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:10:04.345658 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:10:04.365231 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:10:04.377497 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:10:04.384411 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:10:04.403051 ntpd[1992]: Listen normally on 6 eth0 [fe80::440:fbff:fec2:99b1%2]:123 Sep 12 17:10:04.403645 ntpd[1992]: 12 Sep 17:10:04 ntpd[1992]: Listen normally on 6 eth0 [fe80::440:fbff:fec2:99b1%2]:123 Sep 12 17:10:04.490473 sshd[2226]: Accepted publickey for core from 147.75.109.163 port 34590 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:04.494612 sshd[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:04.513263 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:10:04.525191 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:10:04.536398 systemd-logind[1999]: New session 1 of user core. Sep 12 17:10:04.567837 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:10:04.586101 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:10:04.602881 (systemd)[2237]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:10:04.852632 systemd[2237]: Queued start job for default target default.target. Sep 12 17:10:04.861247 systemd[2237]: Created slice app.slice - User Application Slice. Sep 12 17:10:04.861331 systemd[2237]: Reached target paths.target - Paths. Sep 12 17:10:04.861366 systemd[2237]: Reached target timers.target - Timers. Sep 12 17:10:04.864288 systemd[2237]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:10:04.900437 systemd[2237]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:10:04.900779 systemd[2237]: Reached target sockets.target - Sockets. Sep 12 17:10:04.900836 systemd[2237]: Reached target basic.target - Basic System. 
Sep 12 17:10:04.900940 systemd[2237]: Reached target default.target - Main User Target. Sep 12 17:10:04.901029 systemd[2237]: Startup finished in 283ms. Sep 12 17:10:04.902019 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:10:04.910905 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:10:04.988659 amazon-ssm-agent[2121]: 2025-09-12 17:10:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:10:05.085875 systemd[1]: Started sshd@1-172.31.22.10:22-147.75.109.163:34606.service - OpenSSH per-connection server daemon (147.75.109.163:34606). Sep 12 17:10:05.096051 amazon-ssm-agent[2121]: 2025-09-12 17:10:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2247) started Sep 12 17:10:05.190814 amazon-ssm-agent[2121]: 2025-09-12 17:10:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:10:05.303570 sshd[2253]: Accepted publickey for core from 147.75.109.163 port 34606 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:05.308938 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:05.319571 systemd-logind[1999]: New session 2 of user core. Sep 12 17:10:05.327572 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:10:05.433882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:05.442816 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:10:05.445633 systemd[1]: Startup finished in 1.211s (kernel) + 9.239s (initrd) + 10.444s (userspace) = 20.895s. Sep 12 17:10:05.463402 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:05.473890 sshd[2253]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:05.484577 systemd[1]: sshd@1-172.31.22.10:22-147.75.109.163:34606.service: Deactivated successfully. Sep 12 17:10:05.489748 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:10:05.493419 systemd-logind[1999]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:10:05.518281 systemd[1]: Started sshd@2-172.31.22.10:22-147.75.109.163:34616.service - OpenSSH per-connection server daemon (147.75.109.163:34616). Sep 12 17:10:05.520819 systemd-logind[1999]: Removed session 2. Sep 12 17:10:05.697317 sshd[2276]: Accepted publickey for core from 147.75.109.163 port 34616 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:05.700350 sshd[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:05.710892 systemd-logind[1999]: New session 3 of user core. Sep 12 17:10:05.724891 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:10:05.849859 sshd[2276]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:05.859493 systemd-logind[1999]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:10:05.860373 systemd[1]: sshd@2-172.31.22.10:22-147.75.109.163:34616.service: Deactivated successfully. Sep 12 17:10:05.865271 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:10:05.867775 systemd-logind[1999]: Removed session 3. 
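sshd identifies the accepted key above only by its SHA256 fingerprint. The same "SHA256:..." form can be reproduced with golang.org/x/crypto/ssh; the sketch below generates a throwaway ed25519 key rather than using this host's key material:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Throwaway key; a real authorized_keys entry would be parsed
        // with ssh.ParseAuthorizedKey instead.
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }
        sshPub, err := ssh.NewPublicKey(pub)
        if err != nil {
            panic(err)
        }
        // Same "SHA256:..." form sshd prints on "Accepted publickey".
        fmt.Println(ssh.FingerprintSHA256(sshPub))
    }
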
Sep 12 17:10:05.891080 systemd[1]: Started sshd@3-172.31.22.10:22-147.75.109.163:34628.service - OpenSSH per-connection server daemon (147.75.109.163:34628). Sep 12 17:10:06.075481 sshd[2287]: Accepted publickey for core from 147.75.109.163 port 34628 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:06.079062 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:06.090454 systemd-logind[1999]: New session 4 of user core. Sep 12 17:10:06.093897 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:10:06.225146 sshd[2287]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:06.234214 systemd[1]: sshd@3-172.31.22.10:22-147.75.109.163:34628.service: Deactivated successfully. Sep 12 17:10:06.238904 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:10:06.242497 systemd-logind[1999]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:10:06.244741 systemd-logind[1999]: Removed session 4. Sep 12 17:10:06.273060 systemd[1]: Started sshd@4-172.31.22.10:22-147.75.109.163:34640.service - OpenSSH per-connection server daemon (147.75.109.163:34640). Sep 12 17:10:06.442688 sshd[2294]: Accepted publickey for core from 147.75.109.163 port 34640 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:06.444450 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:06.456494 systemd-logind[1999]: New session 5 of user core. Sep 12 17:10:06.464934 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:10:06.615265 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:10:06.616827 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:06.639392 sudo[2297]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:06.664274 sshd[2294]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:06.672160 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:10:06.672847 systemd-logind[1999]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:10:06.675444 systemd[1]: sshd@4-172.31.22.10:22-147.75.109.163:34640.service: Deactivated successfully. Sep 12 17:10:06.683890 systemd-logind[1999]: Removed session 5. Sep 12 17:10:06.707852 systemd[1]: Started sshd@5-172.31.22.10:22-147.75.109.163:34642.service - OpenSSH per-connection server daemon (147.75.109.163:34642). Sep 12 17:10:06.818493 kubelet[2267]: E0912 17:10:06.818431 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:06.822998 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:06.823356 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:10:06.826670 systemd[1]: kubelet.service: Consumed 1.467s CPU time. Sep 12 17:10:06.876181 sshd[2302]: Accepted publickey for core from 147.75.109.163 port 34642 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:06.878893 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:06.887610 systemd-logind[1999]: New session 6 of user core. 
Sep 12 17:10:06.894777 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:10:06.998101 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:10:06.999312 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:07.005775 sudo[2308]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:07.015985 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:10:07.016618 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:07.038107 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:10:07.053374 auditctl[2311]: No rules Sep 12 17:10:07.054173 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:10:07.054525 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:10:07.069361 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:10:07.121300 augenrules[2329]: No rules Sep 12 17:10:07.123969 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:10:07.128047 sudo[2307]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:07.150975 sshd[2302]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:07.156090 systemd[1]: sshd@5-172.31.22.10:22-147.75.109.163:34642.service: Deactivated successfully. Sep 12 17:10:07.158918 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:10:07.163619 systemd-logind[1999]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:10:07.165463 systemd-logind[1999]: Removed session 6. Sep 12 17:10:07.193131 systemd[1]: Started sshd@6-172.31.22.10:22-147.75.109.163:34648.service - OpenSSH per-connection server daemon (147.75.109.163:34648). Sep 12 17:10:07.363333 sshd[2337]: Accepted publickey for core from 147.75.109.163 port 34648 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:07.366044 sshd[2337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:07.373743 systemd-logind[1999]: New session 7 of user core. Sep 12 17:10:07.381856 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:10:07.486231 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:10:07.487614 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:08.345087 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:10:08.348256 (dockerd)[2357]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:10:08.589059 systemd-resolved[1938]: Clock change detected. Flushing caches. Sep 12 17:10:09.082050 dockerd[2357]: time="2025-09-12T17:10:09.081936430Z" level=info msg="Starting up" Sep 12 17:10:09.280826 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport601082831-merged.mount: Deactivated successfully. Sep 12 17:10:09.394503 systemd[1]: var-lib-docker-metacopy\x2dcheck3692768689-merged.mount: Deactivated successfully. Sep 12 17:10:09.408311 dockerd[2357]: time="2025-09-12T17:10:09.407935944Z" level=info msg="Loading containers: start." 
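dockerd is starting here and, once initialization completes, exposes its Engine API on /run/docker.sock (the "API listen" line appears just below). A minimal standard-library sketch of querying that API over the unix socket, assuming the daemon is up; the "docker" host name in the URL is a placeholder, since the socket carries the request:

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        // Route all HTTP traffic through the daemon's unix socket.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }

        // Host name is ignored; the socket carries the request.
        resp, err := client.Get("http://docker/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // JSON including Version and ApiVersion
    }
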
Sep 12 17:10:09.587116 kernel: Initializing XFRM netlink socket Sep 12 17:10:09.626623 (udev-worker)[2381]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:10:09.729563 systemd-networkd[1937]: docker0: Link UP Sep 12 17:10:09.753429 dockerd[2357]: time="2025-09-12T17:10:09.753374005Z" level=info msg="Loading containers: done." Sep 12 17:10:09.788356 dockerd[2357]: time="2025-09-12T17:10:09.788258101Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:10:09.788665 dockerd[2357]: time="2025-09-12T17:10:09.788470513Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:10:09.788744 dockerd[2357]: time="2025-09-12T17:10:09.788714497Z" level=info msg="Daemon has completed initialization" Sep 12 17:10:09.848197 dockerd[2357]: time="2025-09-12T17:10:09.848124686Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:10:09.848268 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:10:10.275890 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1200873104-merged.mount: Deactivated successfully. Sep 12 17:10:11.805320 containerd[2023]: time="2025-09-12T17:10:11.805258395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 17:10:12.418301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320199655.mount: Deactivated successfully. Sep 12 17:10:13.884053 containerd[2023]: time="2025-09-12T17:10:13.883967118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:13.886490 containerd[2023]: time="2025-09-12T17:10:13.886161366Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685" Sep 12 17:10:13.887728 containerd[2023]: time="2025-09-12T17:10:13.887649534Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:13.895470 containerd[2023]: time="2025-09-12T17:10:13.893410602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:13.896056 containerd[2023]: time="2025-09-12T17:10:13.896007906Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.090680703s" Sep 12 17:10:13.896200 containerd[2023]: time="2025-09-12T17:10:13.896168970Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 12 17:10:13.897341 containerd[2023]: time="2025-09-12T17:10:13.897298278Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 17:10:15.482507 containerd[2023]: time="2025-09-12T17:10:15.480757542Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:15.483883 containerd[2023]: time="2025-09-12T17:10:15.483825366Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200" Sep 12 17:10:15.485171 containerd[2023]: time="2025-09-12T17:10:15.485108394Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:15.493820 containerd[2023]: time="2025-09-12T17:10:15.493729530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:15.496895 containerd[2023]: time="2025-09-12T17:10:15.496822590Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.59909968s" Sep 12 17:10:15.497129 containerd[2023]: time="2025-09-12T17:10:15.497088102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 12 17:10:15.498282 containerd[2023]: time="2025-09-12T17:10:15.498191538Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 17:10:16.825896 containerd[2023]: time="2025-09-12T17:10:16.825799772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:16.828054 containerd[2023]: time="2025-09-12T17:10:16.827981192Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324" Sep 12 17:10:16.830884 containerd[2023]: time="2025-09-12T17:10:16.830805968Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:16.837749 containerd[2023]: time="2025-09-12T17:10:16.837641564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:16.843096 containerd[2023]: time="2025-09-12T17:10:16.842989676Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.344441894s" Sep 12 17:10:16.843096 containerd[2023]: time="2025-09-12T17:10:16.843073856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 12 17:10:16.844410 containerd[2023]: time="2025-09-12T17:10:16.844285893Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 
12 17:10:17.259730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:10:17.275846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:17.771887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:17.778083 (kubelet)[2576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:17.865004 kubelet[2576]: E0912 17:10:17.864943 2576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:17.872770 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:17.873262 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:10:18.427714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1226161698.mount: Deactivated successfully. Sep 12 17:10:18.970941 containerd[2023]: time="2025-09-12T17:10:18.970875035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:18.973308 containerd[2023]: time="2025-09-12T17:10:18.973219031Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Sep 12 17:10:18.974910 containerd[2023]: time="2025-09-12T17:10:18.974854643Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:18.979849 containerd[2023]: time="2025-09-12T17:10:18.979754783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:18.981600 containerd[2023]: time="2025-09-12T17:10:18.981522119Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 2.137102942s" Sep 12 17:10:18.981600 containerd[2023]: time="2025-09-12T17:10:18.981584987Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 12 17:10:18.983758 containerd[2023]: time="2025-09-12T17:10:18.983703263Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:10:19.501638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228961160.mount: Deactivated successfully. 
Sep 12 17:10:20.616470 containerd[2023]: time="2025-09-12T17:10:20.615137951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:20.617347 containerd[2023]: time="2025-09-12T17:10:20.617283467Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 12 17:10:20.618926 containerd[2023]: time="2025-09-12T17:10:20.618876119Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:20.625096 containerd[2023]: time="2025-09-12T17:10:20.625044467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:20.627688 containerd[2023]: time="2025-09-12T17:10:20.627631199Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.64386148s" Sep 12 17:10:20.627860 containerd[2023]: time="2025-09-12T17:10:20.627828923Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 17:10:20.629658 containerd[2023]: time="2025-09-12T17:10:20.629617331Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:10:21.082788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1571639626.mount: Deactivated successfully. 
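Each PullImage sequence above is containerd's CRI plugin resolving a ref, fetching and unpacking layers (the tmpmount units are unpack scratch mounts), and recording the repo tag, digest, and size. Roughly the same pull can be driven through the containerd Go client; a sketch, assuming the same socket and the CRI plugin's "k8s.io" namespace:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack into the default snapshotter (overlayfs above).
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        size, _ := img.Size(ctx)
        fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
    }
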
Sep 12 17:10:21.093246 containerd[2023]: time="2025-09-12T17:10:21.093168586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:21.094807 containerd[2023]: time="2025-09-12T17:10:21.094740226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 17:10:21.097478 containerd[2023]: time="2025-09-12T17:10:21.096386746Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:21.101623 containerd[2023]: time="2025-09-12T17:10:21.101560114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:21.103428 containerd[2023]: time="2025-09-12T17:10:21.103366918Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 473.552439ms" Sep 12 17:10:21.103594 containerd[2023]: time="2025-09-12T17:10:21.103424674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:10:21.104282 containerd[2023]: time="2025-09-12T17:10:21.104224534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 17:10:21.750431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906347547.mount: Deactivated successfully. Sep 12 17:10:24.237274 containerd[2023]: time="2025-09-12T17:10:24.237195541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:24.289568 containerd[2023]: time="2025-09-12T17:10:24.289496917Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 12 17:10:24.314480 containerd[2023]: time="2025-09-12T17:10:24.313049042Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:24.323097 containerd[2023]: time="2025-09-12T17:10:24.323010326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:24.325941 containerd[2023]: time="2025-09-12T17:10:24.325887902Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.221605384s" Sep 12 17:10:24.326116 containerd[2023]: time="2025-09-12T17:10:24.326085278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 12 17:10:28.123724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 12 17:10:28.135616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:28.509879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:28.522552 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:28.600490 kubelet[2725]: E0912 17:10:28.600043 2725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:28.603430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:28.603788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:10:32.628974 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:32.643315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:32.705659 systemd[1]: Reloading requested from client PID 2740 ('systemctl') (unit session-7.scope)... Sep 12 17:10:32.705692 systemd[1]: Reloading... Sep 12 17:10:32.951480 zram_generator::config[2787]: No configuration found. Sep 12 17:10:33.194328 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:10:33.370537 systemd[1]: Reloading finished in 664 ms. Sep 12 17:10:33.432037 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 17:10:33.477715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:33.484400 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:33.489208 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:10:33.489734 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:33.496220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:33.836317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:33.854260 (kubelet)[2850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:10:33.930484 kubelet[2850]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:10:33.930484 kubelet[2850]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:10:33.930484 kubelet[2850]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
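The crash loops above (the kubelet exits because /var/lib/kubelet/config.yaml does not exist yet) and the flag deprecation warnings just below both point at the same file: kubelet settings are expected to live in a KubeletConfiguration written there, which kubeadm does during init/join. An illustrative sketch of emitting such a file from the published Go types, assuming k8s.io/kubelet and sigs.k8s.io/yaml are on the module path; the values shown are illustrative, not this host's actual config:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cfg := kubeletv1beta1.KubeletConfiguration{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "kubelet.config.k8s.io/v1beta1",
                Kind:       "KubeletConfiguration",
            },
            // Matches the systemd cgroup driver visible in the CRI config above.
            CgroupDriver: "systemd",
        }
        out, err := yaml.Marshal(cfg)
        if err != nil {
            panic(err)
        }
        // kubeadm writes the equivalent of this to /var/lib/kubelet/config.yaml.
        fmt.Print(string(out))
    }
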
Sep 12 17:10:33.930484 kubelet[2850]: I0912 17:10:33.929682 2850 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:10:35.576107 kubelet[2850]: I0912 17:10:35.576027 2850 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:10:35.576107 kubelet[2850]: I0912 17:10:35.576084 2850 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:10:35.577497 kubelet[2850]: I0912 17:10:35.576900 2850 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:10:35.635384 kubelet[2850]: E0912 17:10:35.635318 2850 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.10:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:10:35.640148 kubelet[2850]: I0912 17:10:35.640080 2850 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:10:35.650511 kubelet[2850]: E0912 17:10:35.649991 2850 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:10:35.650511 kubelet[2850]: I0912 17:10:35.650048 2850 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:10:35.655368 kubelet[2850]: I0912 17:10:35.655306 2850 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:10:35.656932 kubelet[2850]: I0912 17:10:35.656856 2850 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:10:35.657222 kubelet[2850]: I0912 17:10:35.656922 2850 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-10","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:10:35.657512 kubelet[2850]: I0912 17:10:35.657365 2850 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:10:35.657512 kubelet[2850]: I0912 17:10:35.657385 2850 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 17:10:35.657796 kubelet[2850]: I0912 17:10:35.657753 2850 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:10:35.666989 kubelet[2850]: I0912 17:10:35.666385 2850 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 17:10:35.666989 kubelet[2850]: I0912 17:10:35.666461 2850 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:10:35.666989 kubelet[2850]: I0912 17:10:35.666508 2850 kubelet.go:352] "Adding apiserver pod source"
Sep 12 17:10:35.666989 kubelet[2850]: I0912 17:10:35.666533 2850 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:10:35.676360 kubelet[2850]: W0912 17:10:35.676279 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-10&limit=500&resourceVersion=0": dial tcp 172.31.22.10:6443: connect: connection refused
Sep 12 17:10:35.676685 kubelet[2850]: E0912 17:10:35.676624 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-10&limit=500&resourceVersion=0\": dial tcp 172.31.22.10:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:10:35.677390 kubelet[2850]: I0912 17:10:35.676997 2850 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:10:35.678196 kubelet[2850]: I0912 17:10:35.678166 2850 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:10:35.678540 kubelet[2850]: W0912 17:10:35.678518 2850 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:10:35.682041 kubelet[2850]: I0912 17:10:35.681993 2850 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:10:35.682264 kubelet[2850]: I0912 17:10:35.682243 2850 server.go:1287] "Started kubelet"
Sep 12 17:10:35.692134 kubelet[2850]: W0912 17:10:35.692007 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.10:6443: connect: connection refused
Sep 12 17:10:35.692134 kubelet[2850]: E0912 17:10:35.692123 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.10:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:10:35.693472 kubelet[2850]: I0912 17:10:35.693418 2850 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:10:35.696407 kubelet[2850]: I0912 17:10:35.696343 2850 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:10:35.699571 kubelet[2850]: I0912 17:10:35.699533 2850 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 17:10:35.706033 kubelet[2850]: I0912 17:10:35.705916 2850 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:10:35.707491 kubelet[2850]: I0912 17:10:35.706825 2850 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:10:35.710897 kubelet[2850]: I0912 17:10:35.710849 2850 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:10:35.715743 kubelet[2850]: I0912 17:10:35.711560 2850 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:10:35.716361 kubelet[2850]: I0912 17:10:35.711574 2850 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:10:35.716548 kubelet[2850]: E0912 17:10:35.711775 2850 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-10\" not found"
Sep 12 17:10:35.716673 kubelet[2850]: E0912 17:10:35.708536 2850 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.10:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-10.186498242a985f96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-10,UID:ip-172-31-22-10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-10,},FirstTimestamp:2025-09-12 17:10:35.682209686 +0000 UTC m=+1.821080890,LastTimestamp:2025-09-12 17:10:35.682209686 +0000 UTC m=+1.821080890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-10,}"
Sep 12 17:10:35.716931 kubelet[2850]: I0912 17:10:35.716908 2850 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:10:35.719246 kubelet[2850]: W0912 17:10:35.719152 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.10:6443: connect: connection refused
Sep 12 17:10:35.719787 kubelet[2850]: E0912 17:10:35.719524 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.10:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:10:35.720060 kubelet[2850]: E0912 17:10:35.720019 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-10?timeout=10s\": dial tcp 172.31.22.10:6443: connect: connection refused" interval="200ms"
Sep 12 17:10:35.722929 kubelet[2850]: I0912 17:10:35.722125 2850 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:10:35.722929 kubelet[2850]: I0912 17:10:35.722277 2850 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:10:35.726473 kubelet[2850]: I0912 17:10:35.725132 2850 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:10:35.729245 kubelet[2850]: E0912 17:10:35.729199 2850 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:10:35.746023 kubelet[2850]: I0912 17:10:35.745938 2850 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:10:35.752392 kubelet[2850]: I0912 17:10:35.751907 2850 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:10:35.752392 kubelet[2850]: I0912 17:10:35.751958 2850 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 17:10:35.752392 kubelet[2850]: I0912 17:10:35.751990 2850 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:10:35.752392 kubelet[2850]: I0912 17:10:35.752006 2850 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 17:10:35.752392 kubelet[2850]: E0912 17:10:35.752069 2850 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:10:35.753369 kubelet[2850]: W0912 17:10:35.753217 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.10:6443: connect: connection refused
Sep 12 17:10:35.753369 kubelet[2850]: E0912 17:10:35.753301 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.10:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:10:35.765494 kubelet[2850]: I0912 17:10:35.765296 2850 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:10:35.765494 kubelet[2850]: I0912 17:10:35.765331 2850 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:10:35.765494 kubelet[2850]: I0912 17:10:35.765367 2850 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:10:35.768769 kubelet[2850]: I0912 17:10:35.768728 2850 policy_none.go:49] "None policy: Start"
Sep 12 17:10:35.768769 kubelet[2850]: I0912 17:10:35.768771 2850 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:10:35.768964 kubelet[2850]: I0912 17:10:35.768796 2850 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:10:35.778884 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 17:10:35.793710 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 17:10:35.800391 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 17:10:35.813178 kubelet[2850]: I0912 17:10:35.813044 2850 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:10:35.813579 kubelet[2850]: I0912 17:10:35.813343 2850 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:10:35.813579 kubelet[2850]: I0912 17:10:35.813378 2850 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:10:35.814527 kubelet[2850]: I0912 17:10:35.814067 2850 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:10:35.816427 kubelet[2850]: E0912 17:10:35.816357 2850 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:10:35.816763 kubelet[2850]: E0912 17:10:35.816601 2850 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-10\" not found"
Sep 12 17:10:35.816953 kubelet[2850]: E0912 17:10:35.816867 2850 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-10\" not found"
Sep 12 17:10:35.872004 systemd[1]: Created slice kubepods-burstable-podc7496d54ed99feaa0a235bb4a92d5326.slice - libcontainer container kubepods-burstable-podc7496d54ed99feaa0a235bb4a92d5326.slice.
Sep 12 17:10:35.884974 kubelet[2850]: E0912 17:10:35.884914 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-10\" not found" node="ip-172-31-22-10"
Sep 12 17:10:35.892043 systemd[1]: Created slice kubepods-burstable-podf05a602b53919db2d0d005089ba7791a.slice - libcontainer container kubepods-burstable-podf05a602b53919db2d0d005089ba7791a.slice.
Sep 12 17:10:35.901672 kubelet[2850]: E0912 17:10:35.901614 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-10\" not found" node="ip-172-31-22-10"
Sep 12 17:10:35.904008 systemd[1]: Created slice kubepods-burstable-podd6f02fe9e806de5877611f9805793eca.slice - libcontainer container kubepods-burstable-podd6f02fe9e806de5877611f9805793eca.slice.
Sep 12 17:10:35.908037 kubelet[2850]: E0912 17:10:35.907994 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-10\" not found" node="ip-172-31-22-10"
Sep 12 17:10:35.918254 kubelet[2850]: I0912 17:10:35.917801 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7496d54ed99feaa0a235bb4a92d5326-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-10\" (UID: \"c7496d54ed99feaa0a235bb4a92d5326\") " pod="kube-system/kube-apiserver-ip-172-31-22-10"
Sep 12 17:10:35.918254 kubelet[2850]: I0912 17:10:35.917859 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f05a602b53919db2d0d005089ba7791a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-10\" (UID: \"f05a602b53919db2d0d005089ba7791a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:35.918254 kubelet[2850]: I0912 17:10:35.917899 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f05a602b53919db2d0d005089ba7791a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-10\" (UID: \"f05a602b53919db2d0d005089ba7791a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:35.918254 kubelet[2850]: I0912 17:10:35.917941 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6f02fe9e806de5877611f9805793eca-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-10\" (UID: \"d6f02fe9e806de5877611f9805793eca\") " pod="kube-system/kube-scheduler-ip-172-31-22-10"
Sep 12 17:10:35.918254 kubelet[2850]: I0912 17:10:35.917977 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7496d54ed99feaa0a235bb4a92d5326-ca-certs\") pod \"kube-apiserver-ip-172-31-22-10\" (UID: \"c7496d54ed99feaa0a235bb4a92d5326\") " pod="kube-system/kube-apiserver-ip-172-31-22-10"
Sep 12 17:10:35.918771 kubelet[2850]: I0912 17:10:35.918011 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7496d54ed99feaa0a235bb4a92d5326-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-10\" (UID: \"c7496d54ed99feaa0a235bb4a92d5326\") " pod="kube-system/kube-apiserver-ip-172-31-22-10"
Sep 12 17:10:35.918771 kubelet[2850]: I0912 17:10:35.918048 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f05a602b53919db2d0d005089ba7791a-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-10\" (UID: \"f05a602b53919db2d0d005089ba7791a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:35.918771 kubelet[2850]: I0912 17:10:35.918084 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f05a602b53919db2d0d005089ba7791a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-10\" (UID: \"f05a602b53919db2d0d005089ba7791a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:35.918771 kubelet[2850]: I0912 17:10:35.918119 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f05a602b53919db2d0d005089ba7791a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-10\" (UID: \"f05a602b53919db2d0d005089ba7791a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:35.920696 kubelet[2850]: E0912 17:10:35.920628 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-10?timeout=10s\": dial tcp 172.31.22.10:6443: connect: connection refused" interval="400ms"
Sep 12 17:10:35.921509 kubelet[2850]: I0912 17:10:35.920816 2850 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-10"
Sep 12 17:10:35.921509 kubelet[2850]: E0912 17:10:35.921322 2850 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.10:6443/api/v1/nodes\": dial tcp 172.31.22.10:6443: connect: connection refused" node="ip-172-31-22-10"
Sep 12 17:10:36.123672 kubelet[2850]: I0912 17:10:36.123625 2850 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-10"
Sep 12 17:10:36.124342 kubelet[2850]: E0912 17:10:36.124293 2850 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.10:6443/api/v1/nodes\": dial tcp 172.31.22.10:6443: connect: connection refused" node="ip-172-31-22-10"
Sep 12 17:10:36.188526 containerd[2023]: time="2025-09-12T17:10:36.187364677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-10,Uid:c7496d54ed99feaa0a235bb4a92d5326,Namespace:kube-system,Attempt:0,}"
Sep 12 17:10:36.203323 containerd[2023]: time="2025-09-12T17:10:36.203204929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-10,Uid:f05a602b53919db2d0d005089ba7791a,Namespace:kube-system,Attempt:0,}"
Sep 12 17:10:36.210118 containerd[2023]: time="2025-09-12T17:10:36.209705317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-10,Uid:d6f02fe9e806de5877611f9805793eca,Namespace:kube-system,Attempt:0,}"
Sep 12 17:10:36.322029 kubelet[2850]: E0912 17:10:36.321980 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-10?timeout=10s\": dial tcp 172.31.22.10:6443: connect: connection refused" interval="800ms"
Sep 12 17:10:36.527187 kubelet[2850]: I0912 17:10:36.527040 2850 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-10"
Sep 12 17:10:36.527920 kubelet[2850]: E0912 17:10:36.527868 2850 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.10:6443/api/v1/nodes\": dial tcp 172.31.22.10:6443: connect: connection refused" node="ip-172-31-22-10"
Sep 12 17:10:36.761789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1205284969.mount: Deactivated successfully.
Sep 12 17:10:36.769624 containerd[2023]: time="2025-09-12T17:10:36.769554303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:10:36.773734 containerd[2023]: time="2025-09-12T17:10:36.773677047Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Sep 12 17:10:36.775417 containerd[2023]: time="2025-09-12T17:10:36.774557295Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:10:36.776496 containerd[2023]: time="2025-09-12T17:10:36.776429344Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:10:36.777823 containerd[2023]: time="2025-09-12T17:10:36.777691936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:10:36.779211 containerd[2023]: time="2025-09-12T17:10:36.778998544Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:10:36.779211 containerd[2023]: time="2025-09-12T17:10:36.779160640Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:10:36.787887 containerd[2023]: time="2025-09-12T17:10:36.787830376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:10:36.791480 containerd[2023]: time="2025-09-12T17:10:36.791407960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.864471ms"
Sep 12 17:10:36.796236 containerd[2023]: time="2025-09-12T17:10:36.796169692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 592.844199ms"
Sep 12 17:10:36.796763 containerd[2023]: time="2025-09-12T17:10:36.796527100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 586.673751ms"
Sep 12 17:10:36.911406 kubelet[2850]: W0912 17:10:36.911292 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.10:6443: connect: connection refused
Sep 12 17:10:36.911406 kubelet[2850]: E0912 17:10:36.911361 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.10:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:10:36.959624 kubelet[2850]: W0912 17:10:36.959378 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-10&limit=500&resourceVersion=0": dial tcp 172.31.22.10:6443: connect: connection refused
Sep 12 17:10:36.959624 kubelet[2850]: E0912 17:10:36.959533 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-10&limit=500&resourceVersion=0\": dial tcp 172.31.22.10:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:10:36.993936 containerd[2023]: time="2025-09-12T17:10:36.993787841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:10:36.994109 containerd[2023]: time="2025-09-12T17:10:36.993969929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:10:36.994109 containerd[2023]: time="2025-09-12T17:10:36.994073069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:36.995230 containerd[2023]: time="2025-09-12T17:10:36.994675229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:37.006383 containerd[2023]: time="2025-09-12T17:10:37.005890345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:10:37.006383 containerd[2023]: time="2025-09-12T17:10:37.005969581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:10:37.006383 containerd[2023]: time="2025-09-12T17:10:37.006052165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:37.006716 containerd[2023]: time="2025-09-12T17:10:37.006527905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:37.011987 containerd[2023]: time="2025-09-12T17:10:37.011826709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:10:37.012265 containerd[2023]: time="2025-09-12T17:10:37.011950633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:10:37.013389 containerd[2023]: time="2025-09-12T17:10:37.013034113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:37.013389 containerd[2023]: time="2025-09-12T17:10:37.013263277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:37.057774 systemd[1]: Started cri-containerd-41b33c90d1467675003fd041abe51e01610d79b98c38b002e790df71e4a3589f.scope - libcontainer container 41b33c90d1467675003fd041abe51e01610d79b98c38b002e790df71e4a3589f.
Sep 12 17:10:37.076763 systemd[1]: Started cri-containerd-20c41a351611fbeee01e8b2d4c61f075095b9be90793466ba87a0ff1772c6912.scope - libcontainer container 20c41a351611fbeee01e8b2d4c61f075095b9be90793466ba87a0ff1772c6912.
Sep 12 17:10:37.082324 systemd[1]: Started cri-containerd-89145df897331a2d0ad69acb39b0ad909ca008077bd27c34caa18032e74f3029.scope - libcontainer container 89145df897331a2d0ad69acb39b0ad909ca008077bd27c34caa18032e74f3029.
Sep 12 17:10:37.095526 kubelet[2850]: W0912 17:10:37.094894 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.10:6443: connect: connection refused
Sep 12 17:10:37.095526 kubelet[2850]: E0912 17:10:37.095002 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.10:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:10:37.123851 kubelet[2850]: E0912 17:10:37.123780 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-10?timeout=10s\": dial tcp 172.31.22.10:6443: connect: connection refused" interval="1.6s"
Sep 12 17:10:37.204841 containerd[2023]: time="2025-09-12T17:10:37.204620102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-10,Uid:d6f02fe9e806de5877611f9805793eca,Namespace:kube-system,Attempt:0,} returns sandbox id \"89145df897331a2d0ad69acb39b0ad909ca008077bd27c34caa18032e74f3029\""
Sep 12 17:10:37.214467 containerd[2023]: time="2025-09-12T17:10:37.214299494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-10,Uid:c7496d54ed99feaa0a235bb4a92d5326,Namespace:kube-system,Attempt:0,} returns sandbox id \"20c41a351611fbeee01e8b2d4c61f075095b9be90793466ba87a0ff1772c6912\""
Sep 12 17:10:37.221305 containerd[2023]: time="2025-09-12T17:10:37.221241470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-10,Uid:f05a602b53919db2d0d005089ba7791a,Namespace:kube-system,Attempt:0,} returns sandbox id \"41b33c90d1467675003fd041abe51e01610d79b98c38b002e790df71e4a3589f\""
Sep 12 17:10:37.227648 containerd[2023]: time="2025-09-12T17:10:37.227581082Z" level=info msg="CreateContainer within sandbox \"89145df897331a2d0ad69acb39b0ad909ca008077bd27c34caa18032e74f3029\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 17:10:37.245851 containerd[2023]: time="2025-09-12T17:10:37.245632718Z" level=info msg="CreateContainer within sandbox \"41b33c90d1467675003fd041abe51e01610d79b98c38b002e790df71e4a3589f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:10:37.246224 containerd[2023]: time="2025-09-12T17:10:37.246017018Z" level=info msg="CreateContainer within sandbox \"20c41a351611fbeee01e8b2d4c61f075095b9be90793466ba87a0ff1772c6912\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:10:37.251117 containerd[2023]: time="2025-09-12T17:10:37.251060810Z" level=info msg="CreateContainer within sandbox \"89145df897331a2d0ad69acb39b0ad909ca008077bd27c34caa18032e74f3029\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc3ab3de68620401b0b04600b95e70eab68a7138fe605fe774a9404add8097d8\""
Sep 12 17:10:37.252463 containerd[2023]: time="2025-09-12T17:10:37.252393122Z" level=info msg="StartContainer for \"fc3ab3de68620401b0b04600b95e70eab68a7138fe605fe774a9404add8097d8\""
Sep 12 17:10:37.268928 containerd[2023]: time="2025-09-12T17:10:37.268791038Z" level=info msg="CreateContainer within sandbox \"41b33c90d1467675003fd041abe51e01610d79b98c38b002e790df71e4a3589f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fef36a33ca589e526a3eda91f58f7fd160a1ae5c27f9d375c9a7c3dd48be7042\""
Sep 12 17:10:37.271124 containerd[2023]: time="2025-09-12T17:10:37.270863078Z" level=info msg="StartContainer for \"fef36a33ca589e526a3eda91f58f7fd160a1ae5c27f9d375c9a7c3dd48be7042\""
Sep 12 17:10:37.280869 containerd[2023]: time="2025-09-12T17:10:37.280809782Z" level=info msg="CreateContainer within sandbox \"20c41a351611fbeee01e8b2d4c61f075095b9be90793466ba87a0ff1772c6912\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4cde9b9262c8bbd511955f63a99672fd6253aa2bb630aa1d3f18a5ab1c7afe71\""
Sep 12 17:10:37.281951 containerd[2023]: time="2025-09-12T17:10:37.281857334Z" level=info msg="StartContainer for \"4cde9b9262c8bbd511955f63a99672fd6253aa2bb630aa1d3f18a5ab1c7afe71\""
Sep 12 17:10:37.321776 systemd[1]: Started cri-containerd-fc3ab3de68620401b0b04600b95e70eab68a7138fe605fe774a9404add8097d8.scope - libcontainer container fc3ab3de68620401b0b04600b95e70eab68a7138fe605fe774a9404add8097d8.
Sep 12 17:10:37.332492 kubelet[2850]: I0912 17:10:37.331897 2850 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-10"
Sep 12 17:10:37.334409 kubelet[2850]: E0912 17:10:37.334233 2850 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.10:6443/api/v1/nodes\": dial tcp 172.31.22.10:6443: connect: connection refused" node="ip-172-31-22-10"
Sep 12 17:10:37.353371 kubelet[2850]: W0912 17:10:37.353292 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.10:6443: connect: connection refused
Sep 12 17:10:37.354538 kubelet[2850]: E0912 17:10:37.354170 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.10:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:10:37.365850 systemd[1]: Started cri-containerd-fef36a33ca589e526a3eda91f58f7fd160a1ae5c27f9d375c9a7c3dd48be7042.scope - libcontainer container fef36a33ca589e526a3eda91f58f7fd160a1ae5c27f9d375c9a7c3dd48be7042.
Sep 12 17:10:37.388757 systemd[1]: Started cri-containerd-4cde9b9262c8bbd511955f63a99672fd6253aa2bb630aa1d3f18a5ab1c7afe71.scope - libcontainer container 4cde9b9262c8bbd511955f63a99672fd6253aa2bb630aa1d3f18a5ab1c7afe71.
Sep 12 17:10:37.402859 kubelet[2850]: E0912 17:10:37.402682 2850 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.10:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-10.186498242a985f96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-10,UID:ip-172-31-22-10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-10,},FirstTimestamp:2025-09-12 17:10:35.682209686 +0000 UTC m=+1.821080890,LastTimestamp:2025-09-12 17:10:35.682209686 +0000 UTC m=+1.821080890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-10,}"
Sep 12 17:10:37.483839 containerd[2023]: time="2025-09-12T17:10:37.483763491Z" level=info msg="StartContainer for \"fef36a33ca589e526a3eda91f58f7fd160a1ae5c27f9d375c9a7c3dd48be7042\" returns successfully"
Sep 12 17:10:37.498841 containerd[2023]: time="2025-09-12T17:10:37.498132219Z" level=info msg="StartContainer for \"fc3ab3de68620401b0b04600b95e70eab68a7138fe605fe774a9404add8097d8\" returns successfully"
Sep 12 17:10:37.527389 containerd[2023]: time="2025-09-12T17:10:37.527308167Z" level=info msg="StartContainer for \"4cde9b9262c8bbd511955f63a99672fd6253aa2bb630aa1d3f18a5ab1c7afe71\" returns successfully"
Sep 12 17:10:37.656504 kubelet[2850]: E0912 17:10:37.655796 2850 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.10:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:10:37.778065 kubelet[2850]: E0912 17:10:37.778002 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-10\" not found" node="ip-172-31-22-10"
Sep 12 17:10:37.786326 kubelet[2850]: E0912 17:10:37.786072 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-10\" not found" node="ip-172-31-22-10"
Sep 12 17:10:37.791951 kubelet[2850]: E0912 17:10:37.791916 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-10\" not found" node="ip-172-31-22-10"
Sep 12 17:10:38.794352 kubelet[2850]: E0912 17:10:38.794066 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-10\" not found" node="ip-172-31-22-10"
Sep 12 17:10:38.794352 kubelet[2850]: E0912 17:10:38.794141 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-10\" not found" node="ip-172-31-22-10"
Sep 12 17:10:38.937489 kubelet[2850]: I0912 17:10:38.937366 2850 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-10"
Sep 12 17:10:41.338833 kubelet[2850]: E0912 17:10:41.338770 2850 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-10\" not found" node="ip-172-31-22-10"
Sep 12 17:10:41.373953 kubelet[2850]: I0912 17:10:41.373885 2850 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-10"
Sep 12 17:10:41.413118 kubelet[2850]: I0912 17:10:41.412618 2850 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-10"
Sep 12 17:10:41.454421 kubelet[2850]: E0912 17:10:41.454363 2850 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-10\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-22-10"
Sep 12 17:10:41.455576 kubelet[2850]: I0912 17:10:41.454658 2850 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:41.465313 kubelet[2850]: E0912 17:10:41.465022 2850 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-10\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:41.465313 kubelet[2850]: I0912 17:10:41.465069 2850 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-10"
Sep 12 17:10:41.472685 kubelet[2850]: E0912 17:10:41.472614 2850 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-10\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-22-10"
Sep 12 17:10:41.690504 kubelet[2850]: I0912 17:10:41.690063 2850 apiserver.go:52] "Watching apiserver"
Sep 12 17:10:41.716858 kubelet[2850]: I0912 17:10:41.716790 2850 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:10:43.356338 systemd[1]: Reloading requested from client PID 3126 ('systemctl') (unit session-7.scope)...
Sep 12 17:10:43.356371 systemd[1]: Reloading...
Sep 12 17:10:43.634515 zram_generator::config[3166]: No configuration found.
Sep 12 17:10:43.900994 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:10:44.112693 systemd[1]: Reloading finished in 755 ms.
Sep 12 17:10:44.187293 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:10:44.205663 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:10:44.206263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:10:44.206520 systemd[1]: kubelet.service: Consumed 2.521s CPU time, 128.0M memory peak, 0B memory swap peak.
Sep 12 17:10:44.220780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:10:45.049900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:10:45.064530 (kubelet)[3226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:10:45.209816 kubelet[3226]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:10:45.209816 kubelet[3226]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:10:45.209816 kubelet[3226]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:10:45.210387 kubelet[3226]: I0912 17:10:45.210007 3226 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:10:45.241236 kubelet[3226]: I0912 17:10:45.241172 3226 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:10:45.241236 kubelet[3226]: I0912 17:10:45.241222 3226 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:10:45.243183 kubelet[3226]: I0912 17:10:45.242403 3226 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:10:45.254431 kubelet[3226]: I0912 17:10:45.253209 3226 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 17:10:45.269889 kubelet[3226]: I0912 17:10:45.269821 3226 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:10:45.281899 kubelet[3226]: E0912 17:10:45.281829 3226 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:10:45.281899 kubelet[3226]: I0912 17:10:45.281888 3226 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:10:45.298859 kubelet[3226]: I0912 17:10:45.298794 3226 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:10:45.301180 kubelet[3226]: I0912 17:10:45.301058 3226 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:10:45.302546 kubelet[3226]: I0912 17:10:45.301134 3226 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-10","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:10:45.302765 kubelet[3226]: I0912 17:10:45.302553 3226 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:10:45.302765 kubelet[3226]: I0912 17:10:45.302578 3226 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 17:10:45.302765 kubelet[3226]: I0912 17:10:45.302675 3226 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:10:45.304569 kubelet[3226]: I0912 17:10:45.302965 3226 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 17:10:45.304569 kubelet[3226]: I0912 17:10:45.303002 3226 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:10:45.304569 kubelet[3226]: I0912 17:10:45.303034 3226 kubelet.go:352] "Adding apiserver pod source"
Sep 12 17:10:45.304569 kubelet[3226]: I0912 17:10:45.303054 3226 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:10:45.310846 kubelet[3226]: I0912 17:10:45.310807 3226 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:10:45.312485 kubelet[3226]: I0912 17:10:45.311791 3226 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:10:45.314174 kubelet[3226]: I0912 17:10:45.313398 3226 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:10:45.314561 kubelet[3226]: I0912 17:10:45.314477 3226 server.go:1287] "Started kubelet"
Sep 12 17:10:45.324570 kubelet[3226]: I0912 17:10:45.324112 3226 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:10:45.324913 kubelet[3226]: I0912 17:10:45.324889 3226 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:10:45.328477 kubelet[3226]: I0912 17:10:45.327518 3226 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:10:45.330906 kubelet[3226]: I0912 17:10:45.330426 3226 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:10:45.334622 kubelet[3226]: I0912 17:10:45.331771 3226 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:10:45.351250 kubelet[3226]: I0912 17:10:45.334984 3226 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:10:45.354417 kubelet[3226]: I0912 17:10:45.335004 3226 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:10:45.354417 kubelet[3226]: I0912 17:10:45.341143 3226 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 17:10:45.354794 kubelet[3226]: E0912 17:10:45.342511 3226 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-10\" not found"
Sep 12 17:10:45.355415 kubelet[3226]: I0912 17:10:45.355240 3226 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:10:45.398474 kubelet[3226]: I0912 17:10:45.397710 3226 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:10:45.399528 kubelet[3226]: I0912 17:10:45.398677 3226 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:10:45.399528 kubelet[3226]: I0912 17:10:45.398866 3226 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:10:45.416726 kubelet[3226]: I0912 17:10:45.416659 3226 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:10:45.441136 kubelet[3226]: I0912 17:10:45.441063 3226 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:10:45.458288 kubelet[3226]: I0912 17:10:45.458255 3226 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 17:10:45.459570 kubelet[3226]: I0912 17:10:45.459531 3226 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:10:45.459754 kubelet[3226]: I0912 17:10:45.459735 3226 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 17:10:45.460487 kubelet[3226]: E0912 17:10:45.459904 3226 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:10:45.476060 kubelet[3226]: E0912 17:10:45.458203 3226 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:10:45.560778 kubelet[3226]: E0912 17:10:45.560600 3226 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 17:10:45.617928 kubelet[3226]: I0912 17:10:45.617854 3226 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:10:45.617928 kubelet[3226]: I0912 17:10:45.617917 3226 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:10:45.618128 kubelet[3226]: I0912 17:10:45.617954 3226 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:10:45.618699 kubelet[3226]: I0912 17:10:45.618232 3226 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 17:10:45.618699 kubelet[3226]: I0912 17:10:45.618266 3226 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 17:10:45.618699 kubelet[3226]: I0912 17:10:45.618303 3226 policy_none.go:49] "None policy: Start"
Sep 12 17:10:45.618699 kubelet[3226]: I0912 17:10:45.618322 3226 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:10:45.618699 kubelet[3226]: I0912 17:10:45.618345 3226 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:10:45.618699 kubelet[3226]: I0912 17:10:45.618573 3226 state_mem.go:75] "Updated machine memory state"
Sep 12 17:10:45.632967 kubelet[3226]: I0912 17:10:45.632791 3226 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:10:45.633413 kubelet[3226]: I0912 17:10:45.633166 3226 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:10:45.633413 kubelet[3226]: I0912 17:10:45.633201 3226 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:10:45.638714 kubelet[3226]: I0912 17:10:45.638550 3226 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:10:45.642726 kubelet[3226]: E0912 17:10:45.642676 3226 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:10:45.763093 kubelet[3226]: I0912 17:10:45.761875 3226 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-10"
Sep 12 17:10:45.764814 kubelet[3226]: I0912 17:10:45.764781 3226 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:45.767994 kubelet[3226]: I0912 17:10:45.766357 3226 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-10"
Sep 12 17:10:45.767994 kubelet[3226]: I0912 17:10:45.766426 3226 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-10"
Sep 12 17:10:45.791219 kubelet[3226]: I0912 17:10:45.790764 3226 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-22-10"
Sep 12 17:10:45.791219 kubelet[3226]: I0912 17:10:45.790877 3226 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-10"
Sep 12 17:10:45.857649 kubelet[3226]: I0912 17:10:45.856935 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7496d54ed99feaa0a235bb4a92d5326-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-10\" (UID: \"c7496d54ed99feaa0a235bb4a92d5326\") " pod="kube-system/kube-apiserver-ip-172-31-22-10"
Sep 12 17:10:45.857649 kubelet[3226]: I0912 17:10:45.856996 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f05a602b53919db2d0d005089ba7791a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-10\" (UID: \"f05a602b53919db2d0d005089ba7791a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:45.857649 kubelet[3226]: I0912 17:10:45.857032 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7496d54ed99feaa0a235bb4a92d5326-ca-certs\") pod \"kube-apiserver-ip-172-31-22-10\" (UID: \"c7496d54ed99feaa0a235bb4a92d5326\") " pod="kube-system/kube-apiserver-ip-172-31-22-10"
Sep 12 17:10:45.857649 kubelet[3226]: I0912 17:10:45.857066 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f05a602b53919db2d0d005089ba7791a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-10\" (UID: \"f05a602b53919db2d0d005089ba7791a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:45.857649 kubelet[3226]: I0912 17:10:45.857108 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f05a602b53919db2d0d005089ba7791a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-10\" (UID: \"f05a602b53919db2d0d005089ba7791a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:45.858007 kubelet[3226]: I0912 17:10:45.857149 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6f02fe9e806de5877611f9805793eca-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-10\" (UID: \"d6f02fe9e806de5877611f9805793eca\") " pod="kube-system/kube-scheduler-ip-172-31-22-10"
Sep 12 17:10:45.858007 kubelet[3226]: I0912 17:10:45.857186 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7496d54ed99feaa0a235bb4a92d5326-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-10\" (UID: \"c7496d54ed99feaa0a235bb4a92d5326\") " pod="kube-system/kube-apiserver-ip-172-31-22-10"
Sep 12 17:10:45.858007 kubelet[3226]: I0912 17:10:45.857222 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f05a602b53919db2d0d005089ba7791a-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-10\" (UID: \"f05a602b53919db2d0d005089ba7791a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:45.858007 kubelet[3226]: I0912 17:10:45.857257 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f05a602b53919db2d0d005089ba7791a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-10\" (UID: \"f05a602b53919db2d0d005089ba7791a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-10"
Sep 12 17:10:46.308976 kubelet[3226]: I0912 17:10:46.308628 3226 apiserver.go:52] "Watching apiserver"
Sep 12 17:10:46.353930 kubelet[3226]: I0912 17:10:46.353840 3226 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:10:46.525648 kubelet[3226]: I0912 17:10:46.524542 3226 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-10"
Sep 12 17:10:46.538750 kubelet[3226]: E0912 17:10:46.538482 3226 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-10\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-10"
Sep 12 17:10:46.591647 kubelet[3226]: I0912 17:10:46.590620 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-10" podStartSLOduration=1.590596692 podStartE2EDuration="1.590596692s" podCreationTimestamp="2025-09-12 17:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:46.57483102 +0000 UTC m=+1.497961664" watchObservedRunningTime="2025-09-12 17:10:46.590596692 +0000 UTC m=+1.513727324"
Sep 12 17:10:46.591647 kubelet[3226]: I0912 17:10:46.590811 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-10" podStartSLOduration=1.5908028760000001 podStartE2EDuration="1.590802876s" podCreationTimestamp="2025-09-12 17:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:46.589391904 +0000 UTC m=+1.512522548" watchObservedRunningTime="2025-09-12 17:10:46.590802876 +0000 UTC m=+1.513933496"
Sep 12 17:10:46.636026 kubelet[3226]: I0912 17:10:46.635939 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-10" podStartSLOduration=1.635914692 podStartE2EDuration="1.635914692s" podCreationTimestamp="2025-09-12 17:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:46.60887694 +0000 UTC m=+1.532007560" watchObservedRunningTime="2025-09-12 17:10:46.635914692 +0000 UTC m=+1.559045336"
Sep 12 17:10:46.738538 update_engine[2000]: I20250912 17:10:46.737941 2000 update_attempter.cc:509] Updating boot flags...
Sep 12 17:10:46.881522 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3278)
Sep 12 17:10:47.405718 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3281)
Sep 12 17:10:49.811158 kubelet[3226]: I0912 17:10:49.810630 3226 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 17:10:49.812241 kubelet[3226]: I0912 17:10:49.811856 3226 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 17:10:49.812313 containerd[2023]: time="2025-09-12T17:10:49.811244644Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 17:10:50.569081 systemd[1]: Created slice kubepods-besteffort-pod9d292c97_0686_46d3_a97c_f6c8d443fa07.slice - libcontainer container kubepods-besteffort-pod9d292c97_0686_46d3_a97c_f6c8d443fa07.slice.
Sep 12 17:10:50.602091 kubelet[3226]: I0912 17:10:50.602023 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d292c97-0686-46d3-a97c-f6c8d443fa07-lib-modules\") pod \"kube-proxy-hg6jm\" (UID: \"9d292c97-0686-46d3-a97c-f6c8d443fa07\") " pod="kube-system/kube-proxy-hg6jm"
Sep 12 17:10:50.602242 kubelet[3226]: I0912 17:10:50.602103 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d292c97-0686-46d3-a97c-f6c8d443fa07-xtables-lock\") pod \"kube-proxy-hg6jm\" (UID: \"9d292c97-0686-46d3-a97c-f6c8d443fa07\") " pod="kube-system/kube-proxy-hg6jm"
Sep 12 17:10:50.602242 kubelet[3226]: I0912 17:10:50.602148 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92msv\" (UniqueName: \"kubernetes.io/projected/9d292c97-0686-46d3-a97c-f6c8d443fa07-kube-api-access-92msv\") pod \"kube-proxy-hg6jm\" (UID: \"9d292c97-0686-46d3-a97c-f6c8d443fa07\") " pod="kube-system/kube-proxy-hg6jm"
Sep 12 17:10:50.602242 kubelet[3226]: I0912 17:10:50.602189 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d292c97-0686-46d3-a97c-f6c8d443fa07-kube-proxy\") pod \"kube-proxy-hg6jm\" (UID: \"9d292c97-0686-46d3-a97c-f6c8d443fa07\") " pod="kube-system/kube-proxy-hg6jm"
Sep 12 17:10:50.885190 containerd[2023]: time="2025-09-12T17:10:50.884416614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hg6jm,Uid:9d292c97-0686-46d3-a97c-f6c8d443fa07,Namespace:kube-system,Attempt:0,}"
Sep 12 17:10:50.952695 systemd[1]: Created slice kubepods-besteffort-podb0bb2b1f_2feb_4e8a_a35c_a6bf8d9a4126.slice - libcontainer container kubepods-besteffort-podb0bb2b1f_2feb_4e8a_a35c_a6bf8d9a4126.slice.
Sep 12 17:10:50.955000 containerd[2023]: time="2025-09-12T17:10:50.953388306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:10:50.955000 containerd[2023]: time="2025-09-12T17:10:50.953528478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:10:50.955000 containerd[2023]: time="2025-09-12T17:10:50.953567118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:50.955000 containerd[2023]: time="2025-09-12T17:10:50.953727966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:51.005798 systemd[1]: Started cri-containerd-8550900c5c6c41ee63057f5d8769b3dab23230abc6040603ff580f9e97dc8e54.scope - libcontainer container 8550900c5c6c41ee63057f5d8769b3dab23230abc6040603ff580f9e97dc8e54.
Sep 12 17:10:51.012849 kubelet[3226]: I0912 17:10:51.012529 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rpws\" (UniqueName: \"kubernetes.io/projected/b0bb2b1f-2feb-4e8a-a35c-a6bf8d9a4126-kube-api-access-2rpws\") pod \"tigera-operator-755d956888-wlm5z\" (UID: \"b0bb2b1f-2feb-4e8a-a35c-a6bf8d9a4126\") " pod="tigera-operator/tigera-operator-755d956888-wlm5z"
Sep 12 17:10:51.012849 kubelet[3226]: I0912 17:10:51.012653 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b0bb2b1f-2feb-4e8a-a35c-a6bf8d9a4126-var-lib-calico\") pod \"tigera-operator-755d956888-wlm5z\" (UID: \"b0bb2b1f-2feb-4e8a-a35c-a6bf8d9a4126\") " pod="tigera-operator/tigera-operator-755d956888-wlm5z"
Sep 12 17:10:51.058394 containerd[2023]: time="2025-09-12T17:10:51.058192874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hg6jm,Uid:9d292c97-0686-46d3-a97c-f6c8d443fa07,Namespace:kube-system,Attempt:0,} returns sandbox id \"8550900c5c6c41ee63057f5d8769b3dab23230abc6040603ff580f9e97dc8e54\""
Sep 12 17:10:51.064695 containerd[2023]: time="2025-09-12T17:10:51.064644194Z" level=info msg="CreateContainer within sandbox \"8550900c5c6c41ee63057f5d8769b3dab23230abc6040603ff580f9e97dc8e54\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:10:51.086953 containerd[2023]: time="2025-09-12T17:10:51.086794503Z" level=info msg="CreateContainer within sandbox \"8550900c5c6c41ee63057f5d8769b3dab23230abc6040603ff580f9e97dc8e54\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ba196c2dd9f0d3559a599099a468ac5b373821e6d80f17e5c641da396bfe9af7\""
Sep 12 17:10:51.089829 containerd[2023]: time="2025-09-12T17:10:51.089775747Z" level=info msg="StartContainer for \"ba196c2dd9f0d3559a599099a468ac5b373821e6d80f17e5c641da396bfe9af7\""
Sep 12 17:10:51.149758 systemd[1]: Started cri-containerd-ba196c2dd9f0d3559a599099a468ac5b373821e6d80f17e5c641da396bfe9af7.scope - libcontainer container ba196c2dd9f0d3559a599099a468ac5b373821e6d80f17e5c641da396bfe9af7.
Sep 12 17:10:51.210084 containerd[2023]: time="2025-09-12T17:10:51.209880159Z" level=info msg="StartContainer for \"ba196c2dd9f0d3559a599099a468ac5b373821e6d80f17e5c641da396bfe9af7\" returns successfully"
Sep 12 17:10:51.264955 containerd[2023]: time="2025-09-12T17:10:51.264783879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-wlm5z,Uid:b0bb2b1f-2feb-4e8a-a35c-a6bf8d9a4126,Namespace:tigera-operator,Attempt:0,}"
Sep 12 17:10:51.316983 containerd[2023]: time="2025-09-12T17:10:51.316582600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:10:51.316983 containerd[2023]: time="2025-09-12T17:10:51.316680388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:10:51.316983 containerd[2023]: time="2025-09-12T17:10:51.316718920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:51.316983 containerd[2023]: time="2025-09-12T17:10:51.316885360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:51.354038 systemd[1]: Started cri-containerd-ccad70116b0d94174754533ad495e5886424a92036e0bb9b28c09d6905814e31.scope - libcontainer container ccad70116b0d94174754533ad495e5886424a92036e0bb9b28c09d6905814e31.
Sep 12 17:10:51.422649 containerd[2023]: time="2025-09-12T17:10:51.421693732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-wlm5z,Uid:b0bb2b1f-2feb-4e8a-a35c-a6bf8d9a4126,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ccad70116b0d94174754533ad495e5886424a92036e0bb9b28c09d6905814e31\""
Sep 12 17:10:51.428858 containerd[2023]: time="2025-09-12T17:10:51.428355460Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 17:10:52.207765 kubelet[3226]: I0912 17:10:52.207614 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hg6jm" podStartSLOduration=2.207590212 podStartE2EDuration="2.207590212s" podCreationTimestamp="2025-09-12 17:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:51.558422573 +0000 UTC m=+6.481553217" watchObservedRunningTime="2025-09-12 17:10:52.207590212 +0000 UTC m=+7.130720832"
Sep 12 17:10:52.807412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440277925.mount: Deactivated successfully.
Sep 12 17:10:53.537980 containerd[2023]: time="2025-09-12T17:10:53.537923587Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:53.539437 containerd[2023]: time="2025-09-12T17:10:53.539384131Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 17:10:53.540713 containerd[2023]: time="2025-09-12T17:10:53.540654811Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:53.551470 containerd[2023]: time="2025-09-12T17:10:53.549057643Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:53.551709 containerd[2023]: time="2025-09-12T17:10:53.551658067Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.123097323s" Sep 12 17:10:53.551878 containerd[2023]: time="2025-09-12T17:10:53.551829823Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 17:10:53.558819 containerd[2023]: time="2025-09-12T17:10:53.558763831Z" level=info msg="CreateContainer within sandbox \"ccad70116b0d94174754533ad495e5886424a92036e0bb9b28c09d6905814e31\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:10:53.583145 containerd[2023]: time="2025-09-12T17:10:53.583093843Z" level=info msg="CreateContainer within sandbox \"ccad70116b0d94174754533ad495e5886424a92036e0bb9b28c09d6905814e31\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4640fb72a2aea2220ab555898b51a1022fbecb21019752eebe1fc7b5cc14f125\"" Sep 12 17:10:53.584536 containerd[2023]: time="2025-09-12T17:10:53.584476903Z" level=info msg="StartContainer for \"4640fb72a2aea2220ab555898b51a1022fbecb21019752eebe1fc7b5cc14f125\"" Sep 12 17:10:53.634099 systemd[1]: Started cri-containerd-4640fb72a2aea2220ab555898b51a1022fbecb21019752eebe1fc7b5cc14f125.scope - libcontainer container 4640fb72a2aea2220ab555898b51a1022fbecb21019752eebe1fc7b5cc14f125. 
Sep 12 17:10:53.681051 containerd[2023]: time="2025-09-12T17:10:53.680975647Z" level=info msg="StartContainer for \"4640fb72a2aea2220ab555898b51a1022fbecb21019752eebe1fc7b5cc14f125\" returns successfully" Sep 12 17:10:55.785810 kubelet[3226]: I0912 17:10:55.785707 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-wlm5z" podStartSLOduration=3.656169271 podStartE2EDuration="5.785685814s" podCreationTimestamp="2025-09-12 17:10:50 +0000 UTC" firstStartedPulling="2025-09-12 17:10:51.42504442 +0000 UTC m=+6.348175028" lastFinishedPulling="2025-09-12 17:10:53.554560951 +0000 UTC m=+8.477691571" observedRunningTime="2025-09-12 17:10:54.569089484 +0000 UTC m=+9.492220128" watchObservedRunningTime="2025-09-12 17:10:55.785685814 +0000 UTC m=+10.708816446" Sep 12 17:11:02.366858 sudo[2340]: pam_unix(sudo:session): session closed for user root Sep 12 17:11:02.394783 sshd[2337]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:02.401967 systemd[1]: sshd@6-172.31.22.10:22-147.75.109.163:34648.service: Deactivated successfully. Sep 12 17:11:02.409145 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:11:02.409612 systemd[1]: session-7.scope: Consumed 11.891s CPU time, 150.9M memory peak, 0B memory swap peak. Sep 12 17:11:02.415616 systemd-logind[1999]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:11:02.420007 systemd-logind[1999]: Removed session 7. Sep 12 17:11:11.322815 systemd[1]: Created slice kubepods-besteffort-pod30a4f0ba_830a_4f0b_8697_5fa46d82b8e9.slice - libcontainer container kubepods-besteffort-pod30a4f0ba_830a_4f0b_8697_5fa46d82b8e9.slice. Sep 12 17:11:11.351988 kubelet[3226]: I0912 17:11:11.351905 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4f0ba-830a-4f0b-8697-5fa46d82b8e9-tigera-ca-bundle\") pod \"calico-typha-5c646dbcf4-6vn54\" (UID: \"30a4f0ba-830a-4f0b-8697-5fa46d82b8e9\") " pod="calico-system/calico-typha-5c646dbcf4-6vn54" Sep 12 17:11:11.352694 kubelet[3226]: I0912 17:11:11.352014 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/30a4f0ba-830a-4f0b-8697-5fa46d82b8e9-typha-certs\") pod \"calico-typha-5c646dbcf4-6vn54\" (UID: \"30a4f0ba-830a-4f0b-8697-5fa46d82b8e9\") " pod="calico-system/calico-typha-5c646dbcf4-6vn54" Sep 12 17:11:11.352694 kubelet[3226]: I0912 17:11:11.352072 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vswv\" (UniqueName: \"kubernetes.io/projected/30a4f0ba-830a-4f0b-8697-5fa46d82b8e9-kube-api-access-7vswv\") pod \"calico-typha-5c646dbcf4-6vn54\" (UID: \"30a4f0ba-830a-4f0b-8697-5fa46d82b8e9\") " pod="calico-system/calico-typha-5c646dbcf4-6vn54" Sep 12 17:11:11.637555 containerd[2023]: time="2025-09-12T17:11:11.637239517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c646dbcf4-6vn54,Uid:30a4f0ba-830a-4f0b-8697-5fa46d82b8e9,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:11.708899 containerd[2023]: time="2025-09-12T17:11:11.707836429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:11.710562 containerd[2023]: time="2025-09-12T17:11:11.708528313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:11.710562 containerd[2023]: time="2025-09-12T17:11:11.708587821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:11.710562 containerd[2023]: time="2025-09-12T17:11:11.708770149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:11.753314 systemd[1]: Created slice kubepods-besteffort-pod3733b6a6_76f0_46c2_a577_a4b94dbe5edf.slice - libcontainer container kubepods-besteffort-pod3733b6a6_76f0_46c2_a577_a4b94dbe5edf.slice. Sep 12 17:11:11.754522 kubelet[3226]: I0912 17:11:11.753887 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-lib-modules\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.754522 kubelet[3226]: I0912 17:11:11.753946 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-var-lib-calico\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.754522 kubelet[3226]: I0912 17:11:11.753983 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-xtables-lock\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.754522 kubelet[3226]: I0912 17:11:11.754020 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-node-certs\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.754522 kubelet[3226]: I0912 17:11:11.754065 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-var-run-calico\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.754813 kubelet[3226]: I0912 17:11:11.754106 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-cni-log-dir\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.754813 kubelet[3226]: I0912 17:11:11.754144 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-policysync\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.754813 kubelet[3226]: I0912 17:11:11.754185 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-cni-bin-dir\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.754813 kubelet[3226]: I0912 17:11:11.754219 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-cni-net-dir\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.761476 kubelet[3226]: I0912 17:11:11.754254 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-flexvol-driver-host\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.761476 kubelet[3226]: I0912 17:11:11.756984 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-tigera-ca-bundle\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.761753 kubelet[3226]: I0912 17:11:11.757155 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9bph\" (UniqueName: \"kubernetes.io/projected/3733b6a6-76f0-46c2-a577-a4b94dbe5edf-kube-api-access-b9bph\") pod \"calico-node-jbw2x\" (UID: \"3733b6a6-76f0-46c2-a577-a4b94dbe5edf\") " pod="calico-system/calico-node-jbw2x" Sep 12 17:11:11.793744 systemd[1]: Started cri-containerd-914c881a6547ef1b2a44b3cee9dc7c5fa0c15fb48fb9a0be36424c2f6d02b3ac.scope - libcontainer container 914c881a6547ef1b2a44b3cee9dc7c5fa0c15fb48fb9a0be36424c2f6d02b3ac. Sep 12 17:11:11.872554 kubelet[3226]: E0912 17:11:11.871389 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:11.872807 kubelet[3226]: W0912 17:11:11.872771 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:11.872968 kubelet[3226]: E0912 17:11:11.872942 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:11.877969 kubelet[3226]: E0912 17:11:11.877909 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:11.878169 kubelet[3226]: W0912 17:11:11.878142 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:11.878311 kubelet[3226]: E0912 17:11:11.878286 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:11.926898 kubelet[3226]: E0912 17:11:11.925015 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:11.926898 kubelet[3226]: W0912 17:11:11.925081 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:11.926898 kubelet[3226]: E0912 17:11:11.925118 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.077628 containerd[2023]: time="2025-09-12T17:11:12.076398179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jbw2x,Uid:3733b6a6-76f0-46c2-a577-a4b94dbe5edf,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:12.132459 kubelet[3226]: E0912 17:11:12.131858 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nrhn" podUID="ae5d657e-9e31-4c9f-9e66-064135056e24" Sep 12 17:11:12.138077 kubelet[3226]: E0912 17:11:12.137911 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.138485 kubelet[3226]: W0912 17:11:12.138288 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.138485 kubelet[3226]: E0912 17:11:12.138333 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.143856 kubelet[3226]: E0912 17:11:12.143361 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.144763 kubelet[3226]: W0912 17:11:12.143769 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.144996 kubelet[3226]: E0912 17:11:12.144954 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.146676 kubelet[3226]: E0912 17:11:12.146339 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.146676 kubelet[3226]: W0912 17:11:12.146595 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.146676 kubelet[3226]: E0912 17:11:12.146636 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:12.149858 kubelet[3226]: E0912 17:11:12.149047 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.149858 kubelet[3226]: W0912 17:11:12.149083 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.149858 kubelet[3226]: E0912 17:11:12.149116 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.151627 kubelet[3226]: E0912 17:11:12.151354 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.151627 kubelet[3226]: W0912 17:11:12.151563 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.152812 kubelet[3226]: E0912 17:11:12.151600 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.156787 kubelet[3226]: E0912 17:11:12.156750 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.157453 kubelet[3226]: W0912 17:11:12.156945 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.157453 kubelet[3226]: E0912 17:11:12.156988 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.159884 kubelet[3226]: E0912 17:11:12.158707 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.159884 kubelet[3226]: W0912 17:11:12.158741 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.159884 kubelet[3226]: E0912 17:11:12.158773 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.161641 containerd[2023]: time="2025-09-12T17:11:12.159579515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c646dbcf4-6vn54,Uid:30a4f0ba-830a-4f0b-8697-5fa46d82b8e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"914c881a6547ef1b2a44b3cee9dc7c5fa0c15fb48fb9a0be36424c2f6d02b3ac\"" Sep 12 17:11:12.161641 containerd[2023]: time="2025-09-12T17:11:12.159591551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:12.161641 containerd[2023]: time="2025-09-12T17:11:12.159693059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:12.162891 kubelet[3226]: E0912 17:11:12.162857 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.164599 kubelet[3226]: W0912 17:11:12.163909 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.164599 kubelet[3226]: E0912 17:11:12.163969 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.165623 kubelet[3226]: E0912 17:11:12.165573 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.166327 kubelet[3226]: W0912 17:11:12.165898 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.166327 kubelet[3226]: E0912 17:11:12.165941 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.167920 kubelet[3226]: E0912 17:11:12.167575 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.167920 kubelet[3226]: W0912 17:11:12.167609 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.167920 kubelet[3226]: E0912 17:11:12.167643 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.169480 containerd[2023]: time="2025-09-12T17:11:12.165781943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:12.170049 kubelet[3226]: E0912 17:11:12.169774 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.170049 kubelet[3226]: W0912 17:11:12.169807 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.170049 kubelet[3226]: E0912 17:11:12.169840 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:12.172050 kubelet[3226]: E0912 17:11:12.171382 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.172050 kubelet[3226]: W0912 17:11:12.171411 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.172050 kubelet[3226]: E0912 17:11:12.171504 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.173384 kubelet[3226]: E0912 17:11:12.172954 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.173384 kubelet[3226]: W0912 17:11:12.172988 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.173384 kubelet[3226]: E0912 17:11:12.173037 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.173678 containerd[2023]: time="2025-09-12T17:11:12.172751819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:12.175428 kubelet[3226]: E0912 17:11:12.174402 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.175428 kubelet[3226]: W0912 17:11:12.174483 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.175428 kubelet[3226]: E0912 17:11:12.174519 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.176702 containerd[2023]: time="2025-09-12T17:11:12.175994363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:11:12.177464 kubelet[3226]: E0912 17:11:12.177066 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.177464 kubelet[3226]: W0912 17:11:12.177101 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.177464 kubelet[3226]: E0912 17:11:12.177132 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:12.178188 kubelet[3226]: E0912 17:11:12.178031 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.180583 kubelet[3226]: W0912 17:11:12.178300 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.180583 kubelet[3226]: E0912 17:11:12.178336 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.184004 kubelet[3226]: E0912 17:11:12.183692 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.184004 kubelet[3226]: W0912 17:11:12.183743 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.184004 kubelet[3226]: E0912 17:11:12.183776 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.186169 kubelet[3226]: E0912 17:11:12.185983 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.186169 kubelet[3226]: W0912 17:11:12.186020 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.186169 kubelet[3226]: E0912 17:11:12.186055 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.188583 kubelet[3226]: E0912 17:11:12.188255 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.188583 kubelet[3226]: W0912 17:11:12.188319 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.188583 kubelet[3226]: E0912 17:11:12.188510 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.192097 kubelet[3226]: E0912 17:11:12.191595 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.192097 kubelet[3226]: W0912 17:11:12.191688 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.192097 kubelet[3226]: E0912 17:11:12.191738 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:12.196200 kubelet[3226]: E0912 17:11:12.195014 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.196200 kubelet[3226]: W0912 17:11:12.195268 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.196200 kubelet[3226]: E0912 17:11:12.195306 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.196735 kubelet[3226]: I0912 17:11:12.196490 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ae5d657e-9e31-4c9f-9e66-064135056e24-varrun\") pod \"csi-node-driver-9nrhn\" (UID: \"ae5d657e-9e31-4c9f-9e66-064135056e24\") " pod="calico-system/csi-node-driver-9nrhn" Sep 12 17:11:12.199486 kubelet[3226]: E0912 17:11:12.199243 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.199929 kubelet[3226]: W0912 17:11:12.199288 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.200129 kubelet[3226]: E0912 17:11:12.199718 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.201768 kubelet[3226]: I0912 17:11:12.201195 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae5d657e-9e31-4c9f-9e66-064135056e24-socket-dir\") pod \"csi-node-driver-9nrhn\" (UID: \"ae5d657e-9e31-4c9f-9e66-064135056e24\") " pod="calico-system/csi-node-driver-9nrhn" Sep 12 17:11:12.204555 kubelet[3226]: E0912 17:11:12.203889 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.204555 kubelet[3226]: W0912 17:11:12.203954 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.204555 kubelet[3226]: E0912 17:11:12.204042 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:12.204555 kubelet[3226]: I0912 17:11:12.204348 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhbt\" (UniqueName: \"kubernetes.io/projected/ae5d657e-9e31-4c9f-9e66-064135056e24-kube-api-access-pqhbt\") pod \"csi-node-driver-9nrhn\" (UID: \"ae5d657e-9e31-4c9f-9e66-064135056e24\") " pod="calico-system/csi-node-driver-9nrhn" Sep 12 17:11:12.207742 kubelet[3226]: E0912 17:11:12.207101 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.207742 kubelet[3226]: W0912 17:11:12.207138 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.207742 kubelet[3226]: E0912 17:11:12.207207 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.209644 kubelet[3226]: E0912 17:11:12.209604 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.209976 kubelet[3226]: W0912 17:11:12.209823 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.210570 kubelet[3226]: E0912 17:11:12.210083 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.210928 kubelet[3226]: E0912 17:11:12.210900 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.211231 kubelet[3226]: W0912 17:11:12.211040 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.211740 kubelet[3226]: E0912 17:11:12.211424 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.212336 kubelet[3226]: E0912 17:11:12.211984 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.212336 kubelet[3226]: W0912 17:11:12.212029 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.212336 kubelet[3226]: E0912 17:11:12.212250 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:12.212336 kubelet[3226]: I0912 17:11:12.212299 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae5d657e-9e31-4c9f-9e66-064135056e24-kubelet-dir\") pod \"csi-node-driver-9nrhn\" (UID: \"ae5d657e-9e31-4c9f-9e66-064135056e24\") " pod="calico-system/csi-node-driver-9nrhn" Sep 12 17:11:12.214319 kubelet[3226]: E0912 17:11:12.213711 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.214319 kubelet[3226]: W0912 17:11:12.213747 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.214886 kubelet[3226]: E0912 17:11:12.214681 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.215655 kubelet[3226]: E0912 17:11:12.215140 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.215655 kubelet[3226]: W0912 17:11:12.215170 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.215655 kubelet[3226]: E0912 17:11:12.215219 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.216410 kubelet[3226]: E0912 17:11:12.216379 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.216769 kubelet[3226]: W0912 17:11:12.216614 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.216769 kubelet[3226]: E0912 17:11:12.216685 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.217197 kubelet[3226]: I0912 17:11:12.217167 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae5d657e-9e31-4c9f-9e66-064135056e24-registration-dir\") pod \"csi-node-driver-9nrhn\" (UID: \"ae5d657e-9e31-4c9f-9e66-064135056e24\") " pod="calico-system/csi-node-driver-9nrhn" Sep 12 17:11:12.222252 kubelet[3226]: E0912 17:11:12.221828 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.222252 kubelet[3226]: W0912 17:11:12.221863 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.222252 kubelet[3226]: E0912 17:11:12.221908 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:12.223283 kubelet[3226]: E0912 17:11:12.223093 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.223283 kubelet[3226]: W0912 17:11:12.223126 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.223283 kubelet[3226]: E0912 17:11:12.223174 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.225875 kubelet[3226]: E0912 17:11:12.225495 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.225875 kubelet[3226]: W0912 17:11:12.225531 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.225875 kubelet[3226]: E0912 17:11:12.225583 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.227826 kubelet[3226]: E0912 17:11:12.227500 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.227826 kubelet[3226]: W0912 17:11:12.227539 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.227826 kubelet[3226]: E0912 17:11:12.227572 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.229515 kubelet[3226]: E0912 17:11:12.228742 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.229515 kubelet[3226]: W0912 17:11:12.228775 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.229515 kubelet[3226]: E0912 17:11:12.228806 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.252635 systemd[1]: Started cri-containerd-4ead56d660139e4a20c651d4f712f3d1ca58667323314f31158e84aa19424b07.scope - libcontainer container 4ead56d660139e4a20c651d4f712f3d1ca58667323314f31158e84aa19424b07. 
Sep 12 17:11:12.326830 kubelet[3226]: E0912 17:11:12.326436 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.326830 kubelet[3226]: W0912 17:11:12.326497 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.326830 kubelet[3226]: E0912 17:11:12.326531 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.328477 kubelet[3226]: E0912 17:11:12.327636 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.329049 kubelet[3226]: W0912 17:11:12.328787 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.329049 kubelet[3226]: E0912 17:11:12.328851 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.331697 kubelet[3226]: E0912 17:11:12.331645 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.332040 kubelet[3226]: W0912 17:11:12.331856 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.332040 kubelet[3226]: E0912 17:11:12.331940 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.335185 kubelet[3226]: E0912 17:11:12.334671 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.335185 kubelet[3226]: W0912 17:11:12.334739 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.336103 kubelet[3226]: E0912 17:11:12.335942 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:12.336103 kubelet[3226]: W0912 17:11:12.335975 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:12.336901 kubelet[3226]: E0912 17:11:12.336680 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:12.336901 kubelet[3226]: E0912 17:11:12.336740 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:11:12.339346 kubelet[3226]: E0912 17:11:12.338869 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:12.339346 kubelet[3226]: W0912 17:11:12.338906 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:12.339914 kubelet[3226]: E0912 17:11:12.339814 3226 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same three-entry FlexVolume probe failure (driver-call.go:262, driver-call.go:149, plugins.go:695) repeats, differing only in timestamps, through Sep 12 17:11:12.399; the duplicate entries are elided.]
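The failure pattern above is the kubelet's FlexVolume plugin probe: it executes the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the operation name (here, init) as its argument and expects a single JSON status object on stdout. Because the executable is missing, the call produces empty output and the JSON unmarshal in driver-call.go fails. A minimal sketch of the handshake the probe expects, assuming only the documented FlexVolume call convention (operation as argv[1], JSON result on stdout; the stub itself is illustrative, not Calico's driver):

```go
// flexvol-stub.go — minimal sketch of a FlexVolume driver entry point.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet unmarshals from driver output.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	op := ""
	if len(os.Args) > 1 {
		op = os.Args[1]
	}
	var out driverStatus
	switch op {
	case "init":
		// A well-formed JSON object here is exactly what the failing probe in the
		// log never receives (it sees empty output instead).
		out = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		out = driverStatus{Status: "Not supported", Message: fmt.Sprintf("operation %q not implemented", op)}
	}
	b, _ := json.Marshal(out)
	fmt.Println(string(b))
}
```

Installing any executable that answers init this way at that path would quiet the probe; as the log shows later, Calico's flexvol-driver init container is what eventually provides the real binary.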
Sep 12 17:11:12.436361 containerd[2023]: time="2025-09-12T17:11:12.436124569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jbw2x,Uid:3733b6a6-76f0-46c2-a577-a4b94dbe5edf,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ead56d660139e4a20c651d4f712f3d1ca58667323314f31158e84aa19424b07\""
Sep 12 17:11:12.482361 systemd[1]: run-containerd-runc-k8s.io-914c881a6547ef1b2a44b3cee9dc7c5fa0c15fb48fb9a0be36424c2f6d02b3ac-runc.xDDTrB.mount: Deactivated successfully.
Sep 12 17:11:13.460790 kubelet[3226]: E0912 17:11:13.460081 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nrhn" podUID="ae5d657e-9e31-4c9f-9e66-064135056e24"
Sep 12 17:11:13.734370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016832712.mount: Deactivated successfully.
Sep 12 17:11:15.116696 containerd[2023]: time="2025-09-12T17:11:15.116610398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:15.118764 containerd[2023]: time="2025-09-12T17:11:15.118659518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 12 17:11:15.119786 containerd[2023]: time="2025-09-12T17:11:15.119720306Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:15.130947 containerd[2023]: time="2025-09-12T17:11:15.130717406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:15.134524 containerd[2023]: time="2025-09-12T17:11:15.132957566Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.956888179s"
Sep 12 17:11:15.134524 containerd[2023]: time="2025-09-12T17:11:15.134275166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 12 17:11:15.140005 containerd[2023]: time="2025-09-12T17:11:15.139673270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
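The pull sequence above (ImageCreate events followed by a "Pulled ... in 2.956888179s" summary) is containerd's CRI image service at work. A rough equivalent using containerd's Go client — a sketch, not the CRI plugin's actual code path; the socket path and the "k8s.io" namespace are conventional assumptions for a CRI-managed node, not values taken from this log:

```go
// pull-typha.go — sketch of pulling the same image through containerd's Go client.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```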
Sep 12 17:11:15.182835 containerd[2023]: time="2025-09-12T17:11:15.182151746Z" level=info msg="CreateContainer within sandbox \"914c881a6547ef1b2a44b3cee9dc7c5fa0c15fb48fb9a0be36424c2f6d02b3ac\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 17:11:15.238222 containerd[2023]: time="2025-09-12T17:11:15.238049199Z" level=info msg="CreateContainer within sandbox \"914c881a6547ef1b2a44b3cee9dc7c5fa0c15fb48fb9a0be36424c2f6d02b3ac\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"032e93529ce8be42ee1af73e09478861b36b743ed09fef0f8f4ebabcd9b35a42\""
Sep 12 17:11:15.241050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150906059.mount: Deactivated successfully.
Sep 12 17:11:15.245490 containerd[2023]: time="2025-09-12T17:11:15.242346195Z" level=info msg="StartContainer for \"032e93529ce8be42ee1af73e09478861b36b743ed09fef0f8f4ebabcd9b35a42\""
Sep 12 17:11:15.320819 systemd[1]: Started cri-containerd-032e93529ce8be42ee1af73e09478861b36b743ed09fef0f8f4ebabcd9b35a42.scope - libcontainer container 032e93529ce8be42ee1af73e09478861b36b743ed09fef0f8f4ebabcd9b35a42.
Sep 12 17:11:15.411253 containerd[2023]: time="2025-09-12T17:11:15.410574327Z" level=info msg="StartContainer for \"032e93529ce8be42ee1af73e09478861b36b743ed09fef0f8f4ebabcd9b35a42\" returns successfully"
Sep 12 17:11:15.471763 kubelet[3226]: E0912 17:11:15.471184 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nrhn" podUID="ae5d657e-9e31-4c9f-9e66-064135056e24"
[The FlexVolume probe failure repeats continuously from Sep 12 17:11:15.713 through 17:11:15.803; the duplicate entries are elided. The two distinct entries interleaved with that run are kept below.]
Sep 12 17:11:15.723106 kubelet[3226]: I0912 17:11:15.722795 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c646dbcf4-6vn54" podStartSLOduration=1.760236014 podStartE2EDuration="4.722776589s" podCreationTimestamp="2025-09-12 17:11:11 +0000 UTC" firstStartedPulling="2025-09-12 17:11:12.174840395 +0000 UTC m=+27.097971003" lastFinishedPulling="2025-09-12 17:11:15.137380454 +0000 UTC m=+30.060511578" observedRunningTime="2025-09-12 17:11:15.721822169 +0000 UTC m=+30.644952813" watchObservedRunningTime="2025-09-12 17:11:15.722776589 +0000 UTC m=+30.645907221"
Sep 12 17:11:16.628301 kubelet[3226]: I0912 17:11:16.627764 3226 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
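The pod_startup_latency_tracker entry is self-consistent and worth decoding: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (17:11:15.722776589 − 17:11:11 = 4.722776589s), and podStartSLOduration appears to be that E2E figure minus the image-pull window measured on the monotonic clock (the m=+ offsets). A quick check of that reading — the subtraction rule is an inference from the numbers, not a documented contract:

```go
// slo-check.go — reproduces the durations in the latency-tracker entry above.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 27.097971003 // m=+ monotonic offset, seconds
		lastFinishedPulling = 30.060511578 // m=+ monotonic offset, seconds
		podStartE2E         = 4.722776589  // watchObservedRunningTime - podCreationTimestamp
	)
	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image-pull window:   %.9fs\n", pullWindow)             // ≈ 2.962540575s
	fmt.Printf("podStartSLOduration: %.9fs\n", podStartE2E-pullWindow) // ≈ 1.760236014s, matching the log
}
```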
[The FlexVolume probe failure repeats again from Sep 12 17:11:16.645 through 17:11:16.697; the duplicate entries are elided.]
Sep 12 17:11:16.932507 containerd[2023]: time="2025-09-12T17:11:16.931666219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:16.935338 containerd[2023]: time="2025-09-12T17:11:16.934860367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814"
Sep 12 17:11:16.937176 containerd[2023]: time="2025-09-12T17:11:16.936718387Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:16.940429 containerd[2023]: time="2025-09-12T17:11:16.940347655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:16.941816 containerd[2023]: time="2025-09-12T17:11:16.941765779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.802001957s"
Sep 12 17:11:16.942126 containerd[2023]: time="2025-09-12T17:11:16.941970427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 12 17:11:16.950314 containerd[2023]: time="2025-09-12T17:11:16.950018035Z" level=info msg="CreateContainer within sandbox \"4ead56d660139e4a20c651d4f712f3d1ca58667323314f31158e84aa19424b07\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 17:11:16.977499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544452219.mount: Deactivated successfully.
Sep 12 17:11:16.982430 containerd[2023]: time="2025-09-12T17:11:16.982202443Z" level=info msg="CreateContainer within sandbox \"4ead56d660139e4a20c651d4f712f3d1ca58667323314f31158e84aa19424b07\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2e5a185273564adb60b11d1d97adef9cc554da5c5e1609a6ed9ff4de4ccd6922\"" Sep 12 17:11:16.983948 containerd[2023]: time="2025-09-12T17:11:16.983795143Z" level=info msg="StartContainer for \"2e5a185273564adb60b11d1d97adef9cc554da5c5e1609a6ed9ff4de4ccd6922\"" Sep 12 17:11:17.055810 systemd[1]: Started cri-containerd-2e5a185273564adb60b11d1d97adef9cc554da5c5e1609a6ed9ff4de4ccd6922.scope - libcontainer container 2e5a185273564adb60b11d1d97adef9cc554da5c5e1609a6ed9ff4de4ccd6922. Sep 12 17:11:17.115493 containerd[2023]: time="2025-09-12T17:11:17.114542620Z" level=info msg="StartContainer for \"2e5a185273564adb60b11d1d97adef9cc554da5c5e1609a6ed9ff4de4ccd6922\" returns successfully" Sep 12 17:11:17.154019 systemd[1]: cri-containerd-2e5a185273564adb60b11d1d97adef9cc554da5c5e1609a6ed9ff4de4ccd6922.scope: Deactivated successfully. Sep 12 17:11:17.204307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e5a185273564adb60b11d1d97adef9cc554da5c5e1609a6ed9ff4de4ccd6922-rootfs.mount: Deactivated successfully. Sep 12 17:11:17.461541 kubelet[3226]: E0912 17:11:17.460827 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nrhn" podUID="ae5d657e-9e31-4c9f-9e66-064135056e24" Sep 12 17:11:17.550462 containerd[2023]: time="2025-09-12T17:11:17.550321674Z" level=info msg="shim disconnected" id=2e5a185273564adb60b11d1d97adef9cc554da5c5e1609a6ed9ff4de4ccd6922 namespace=k8s.io Sep 12 17:11:17.550947 containerd[2023]: time="2025-09-12T17:11:17.550692126Z" level=warning msg="cleaning up after shim disconnected" id=2e5a185273564adb60b11d1d97adef9cc554da5c5e1609a6ed9ff4de4ccd6922 namespace=k8s.io Sep 12 17:11:17.550947 containerd[2023]: time="2025-09-12T17:11:17.550726194Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:17.636821 containerd[2023]: time="2025-09-12T17:11:17.636756234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:11:19.466360 kubelet[3226]: E0912 17:11:19.466218 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nrhn" podUID="ae5d657e-9e31-4c9f-9e66-064135056e24" Sep 12 17:11:21.389620 containerd[2023]: time="2025-09-12T17:11:21.389539233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:21.391142 containerd[2023]: time="2025-09-12T17:11:21.391086957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 17:11:21.392963 containerd[2023]: time="2025-09-12T17:11:21.392293617Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:21.396969 containerd[2023]: time="2025-09-12T17:11:21.396882321Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:21.399485 containerd[2023]: time="2025-09-12T17:11:21.399384621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.762559159s" Sep 12 17:11:21.400162 containerd[2023]: time="2025-09-12T17:11:21.400062117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 17:11:21.405529 containerd[2023]: time="2025-09-12T17:11:21.405464133Z" level=info msg="CreateContainer within sandbox \"4ead56d660139e4a20c651d4f712f3d1ca58667323314f31158e84aa19424b07\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:11:21.428635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751551406.mount: Deactivated successfully. Sep 12 17:11:21.435002 containerd[2023]: time="2025-09-12T17:11:21.434926761Z" level=info msg="CreateContainer within sandbox \"4ead56d660139e4a20c651d4f712f3d1ca58667323314f31158e84aa19424b07\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"454495a81443ca8b9b87a72156444856cb987381e2646e69d1e2c939bd08d6a7\"" Sep 12 17:11:21.436302 containerd[2023]: time="2025-09-12T17:11:21.436106625Z" level=info msg="StartContainer for \"454495a81443ca8b9b87a72156444856cb987381e2646e69d1e2c939bd08d6a7\"" Sep 12 17:11:21.465469 kubelet[3226]: E0912 17:11:21.461765 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nrhn" podUID="ae5d657e-9e31-4c9f-9e66-064135056e24" Sep 12 17:11:21.503925 systemd[1]: Started cri-containerd-454495a81443ca8b9b87a72156444856cb987381e2646e69d1e2c939bd08d6a7.scope - libcontainer container 454495a81443ca8b9b87a72156444856cb987381e2646e69d1e2c939bd08d6a7. Sep 12 17:11:21.568862 containerd[2023]: time="2025-09-12T17:11:21.568286902Z" level=info msg="StartContainer for \"454495a81443ca8b9b87a72156444856cb987381e2646e69d1e2c939bd08d6a7\" returns successfully" Sep 12 17:11:21.862552 kubelet[3226]: I0912 17:11:21.862173 3226 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:22.685486 containerd[2023]: time="2025-09-12T17:11:22.685388940Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:11:22.690902 systemd[1]: cri-containerd-454495a81443ca8b9b87a72156444856cb987381e2646e69d1e2c939bd08d6a7.scope: Deactivated successfully. Sep 12 17:11:22.737940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-454495a81443ca8b9b87a72156444856cb987381e2646e69d1e2c939bd08d6a7-rootfs.mount: Deactivated successfully. 
Sep 12 17:11:22.769745 kubelet[3226]: I0912 17:11:22.769694 3226 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:11:22.847026 systemd[1]: Created slice kubepods-burstable-pod586bc779_7033_4130_9b97_e99099aaf59c.slice - libcontainer container kubepods-burstable-pod586bc779_7033_4130_9b97_e99099aaf59c.slice. Sep 12 17:11:22.868620 systemd[1]: Created slice kubepods-burstable-pod6229e9e2_94b6_4d89_8229_2b5bbe16089b.slice - libcontainer container kubepods-burstable-pod6229e9e2_94b6_4d89_8229_2b5bbe16089b.slice. Sep 12 17:11:22.905184 systemd[1]: Created slice kubepods-besteffort-pod175bb5e7_5b87_45af_af71_37ac118306c2.slice - libcontainer container kubepods-besteffort-pod175bb5e7_5b87_45af_af71_37ac118306c2.slice. Sep 12 17:11:22.941382 kubelet[3226]: I0912 17:11:22.941217 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/175bb5e7-5b87-45af-af71-37ac118306c2-tigera-ca-bundle\") pod \"calico-kube-controllers-8dc95788b-bgx5f\" (UID: \"175bb5e7-5b87-45af-af71-37ac118306c2\") " pod="calico-system/calico-kube-controllers-8dc95788b-bgx5f" Sep 12 17:11:22.941382 kubelet[3226]: I0912 17:11:22.941303 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45ctj\" (UniqueName: \"kubernetes.io/projected/586bc779-7033-4130-9b97-e99099aaf59c-kube-api-access-45ctj\") pod \"coredns-668d6bf9bc-gdmfq\" (UID: \"586bc779-7033-4130-9b97-e99099aaf59c\") " pod="kube-system/coredns-668d6bf9bc-gdmfq" Sep 12 17:11:22.941382 kubelet[3226]: I0912 17:11:22.941354 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6229e9e2-94b6-4d89-8229-2b5bbe16089b-config-volume\") pod \"coredns-668d6bf9bc-7dhd5\" (UID: \"6229e9e2-94b6-4d89-8229-2b5bbe16089b\") " pod="kube-system/coredns-668d6bf9bc-7dhd5" Sep 12 17:11:22.942857 kubelet[3226]: I0912 17:11:22.941420 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwgsv\" (UniqueName: \"kubernetes.io/projected/175bb5e7-5b87-45af-af71-37ac118306c2-kube-api-access-bwgsv\") pod \"calico-kube-controllers-8dc95788b-bgx5f\" (UID: \"175bb5e7-5b87-45af-af71-37ac118306c2\") " pod="calico-system/calico-kube-controllers-8dc95788b-bgx5f" Sep 12 17:11:22.942857 kubelet[3226]: I0912 17:11:22.941495 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql85c\" (UniqueName: \"kubernetes.io/projected/345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0-kube-api-access-ql85c\") pod \"calico-apiserver-647977b4b6-44ncj\" (UID: \"345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0\") " pod="calico-apiserver/calico-apiserver-647977b4b6-44ncj" Sep 12 17:11:22.942857 kubelet[3226]: I0912 17:11:22.941546 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/586bc779-7033-4130-9b97-e99099aaf59c-config-volume\") pod \"coredns-668d6bf9bc-gdmfq\" (UID: \"586bc779-7033-4130-9b97-e99099aaf59c\") " pod="kube-system/coredns-668d6bf9bc-gdmfq" Sep 12 17:11:22.942857 kubelet[3226]: I0912 17:11:22.941582 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp7z8\" (UniqueName: 
\"kubernetes.io/projected/6229e9e2-94b6-4d89-8229-2b5bbe16089b-kube-api-access-sp7z8\") pod \"coredns-668d6bf9bc-7dhd5\" (UID: \"6229e9e2-94b6-4d89-8229-2b5bbe16089b\") " pod="kube-system/coredns-668d6bf9bc-7dhd5" Sep 12 17:11:22.942857 kubelet[3226]: I0912 17:11:22.941640 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0-calico-apiserver-certs\") pod \"calico-apiserver-647977b4b6-44ncj\" (UID: \"345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0\") " pod="calico-apiserver/calico-apiserver-647977b4b6-44ncj" Sep 12 17:11:22.944933 systemd[1]: Created slice kubepods-besteffort-pod5ba4c09f_7a52_43c3_843a_207b43648510.slice - libcontainer container kubepods-besteffort-pod5ba4c09f_7a52_43c3_843a_207b43648510.slice. Sep 12 17:11:22.972573 kubelet[3226]: W0912 17:11:22.971168 3226 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ip-172-31-22-10" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-10' and this object Sep 12 17:11:22.972573 kubelet[3226]: E0912 17:11:22.971255 3226 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ip-172-31-22-10\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-22-10' and this object" logger="UnhandledError" Sep 12 17:11:22.972573 kubelet[3226]: W0912 17:11:22.971355 3226 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ip-172-31-22-10" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-10' and this object Sep 12 17:11:22.972573 kubelet[3226]: E0912 17:11:22.971382 3226 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ip-172-31-22-10\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-22-10' and this object" logger="UnhandledError" Sep 12 17:11:22.972573 kubelet[3226]: W0912 17:11:22.971499 3226 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ip-172-31-22-10" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-10' and this object Sep 12 17:11:22.972951 kubelet[3226]: E0912 17:11:22.971529 3226 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-22-10\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-22-10' and this object" logger="UnhandledError" Sep 12 17:11:22.980715 systemd[1]: Created slice 
kubepods-besteffort-pod345c245e_fb46_4bd0_8ae8_b1e67cf9cdc0.slice - libcontainer container kubepods-besteffort-pod345c245e_fb46_4bd0_8ae8_b1e67cf9cdc0.slice. Sep 12 17:11:23.001800 systemd[1]: Created slice kubepods-besteffort-pod068a7ef4_ec44_4071_aa63_64c2517f0138.slice - libcontainer container kubepods-besteffort-pod068a7ef4_ec44_4071_aa63_64c2517f0138.slice. Sep 12 17:11:23.018775 systemd[1]: Created slice kubepods-besteffort-pod94f00bb7_79a9_4162_8a58_e6291343a943.slice - libcontainer container kubepods-besteffort-pod94f00bb7_79a9_4162_8a58_e6291343a943.slice. Sep 12 17:11:23.042653 kubelet[3226]: I0912 17:11:23.042608 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rll5m\" (UniqueName: \"kubernetes.io/projected/94f00bb7-79a9-4162-8a58-e6291343a943-kube-api-access-rll5m\") pod \"goldmane-54d579b49d-vszm6\" (UID: \"94f00bb7-79a9-4162-8a58-e6291343a943\") " pod="calico-system/goldmane-54d579b49d-vszm6" Sep 12 17:11:23.042879 kubelet[3226]: I0912 17:11:23.042837 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/94f00bb7-79a9-4162-8a58-e6291343a943-goldmane-key-pair\") pod \"goldmane-54d579b49d-vszm6\" (UID: \"94f00bb7-79a9-4162-8a58-e6291343a943\") " pod="calico-system/goldmane-54d579b49d-vszm6" Sep 12 17:11:23.043045 kubelet[3226]: I0912 17:11:23.043019 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ba4c09f-7a52-43c3-843a-207b43648510-calico-apiserver-certs\") pod \"calico-apiserver-647977b4b6-hgchj\" (UID: \"5ba4c09f-7a52-43c3-843a-207b43648510\") " pod="calico-apiserver/calico-apiserver-647977b4b6-hgchj" Sep 12 17:11:23.043330 kubelet[3226]: I0912 17:11:23.043187 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqxjf\" (UniqueName: \"kubernetes.io/projected/068a7ef4-ec44-4071-aa63-64c2517f0138-kube-api-access-jqxjf\") pod \"whisker-57b5cb98ff-kxfbh\" (UID: \"068a7ef4-ec44-4071-aa63-64c2517f0138\") " pod="calico-system/whisker-57b5cb98ff-kxfbh" Sep 12 17:11:23.043330 kubelet[3226]: I0912 17:11:23.043285 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94f00bb7-79a9-4162-8a58-e6291343a943-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-vszm6\" (UID: \"94f00bb7-79a9-4162-8a58-e6291343a943\") " pod="calico-system/goldmane-54d579b49d-vszm6" Sep 12 17:11:23.044781 kubelet[3226]: I0912 17:11:23.044577 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f00bb7-79a9-4162-8a58-e6291343a943-config\") pod \"goldmane-54d579b49d-vszm6\" (UID: \"94f00bb7-79a9-4162-8a58-e6291343a943\") " pod="calico-system/goldmane-54d579b49d-vszm6" Sep 12 17:11:23.049836 kubelet[3226]: I0912 17:11:23.049568 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c88lv\" (UniqueName: \"kubernetes.io/projected/5ba4c09f-7a52-43c3-843a-207b43648510-kube-api-access-c88lv\") pod \"calico-apiserver-647977b4b6-hgchj\" (UID: \"5ba4c09f-7a52-43c3-843a-207b43648510\") " pod="calico-apiserver/calico-apiserver-647977b4b6-hgchj" Sep 12 17:11:23.049836 kubelet[3226]: I0912 17:11:23.049700 3226 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-backend-key-pair\") pod \"whisker-57b5cb98ff-kxfbh\" (UID: \"068a7ef4-ec44-4071-aa63-64c2517f0138\") " pod="calico-system/whisker-57b5cb98ff-kxfbh" Sep 12 17:11:23.049836 kubelet[3226]: I0912 17:11:23.049773 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-ca-bundle\") pod \"whisker-57b5cb98ff-kxfbh\" (UID: \"068a7ef4-ec44-4071-aa63-64c2517f0138\") " pod="calico-system/whisker-57b5cb98ff-kxfbh" Sep 12 17:11:23.174160 containerd[2023]: time="2025-09-12T17:11:23.173786086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdmfq,Uid:586bc779-7033-4130-9b97-e99099aaf59c,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:23.182255 containerd[2023]: time="2025-09-12T17:11:23.182180074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7dhd5,Uid:6229e9e2-94b6-4d89-8229-2b5bbe16089b,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:23.225838 containerd[2023]: time="2025-09-12T17:11:23.224015638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8dc95788b-bgx5f,Uid:175bb5e7-5b87-45af-af71-37ac118306c2,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:23.231161 containerd[2023]: time="2025-09-12T17:11:23.230106070Z" level=info msg="shim disconnected" id=454495a81443ca8b9b87a72156444856cb987381e2646e69d1e2c939bd08d6a7 namespace=k8s.io Sep 12 17:11:23.231161 containerd[2023]: time="2025-09-12T17:11:23.230176930Z" level=warning msg="cleaning up after shim disconnected" id=454495a81443ca8b9b87a72156444856cb987381e2646e69d1e2c939bd08d6a7 namespace=k8s.io Sep 12 17:11:23.231161 containerd[2023]: time="2025-09-12T17:11:23.230207758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:23.269633 containerd[2023]: time="2025-09-12T17:11:23.268984762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647977b4b6-hgchj,Uid:5ba4c09f-7a52-43c3-843a-207b43648510,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:11:23.297587 containerd[2023]: time="2025-09-12T17:11:23.297518975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647977b4b6-44ncj,Uid:345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:11:23.481579 systemd[1]: Created slice kubepods-besteffort-podae5d657e_9e31_4c9f_9e66_064135056e24.slice - libcontainer container kubepods-besteffort-podae5d657e_9e31_4c9f_9e66_064135056e24.slice. 
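Every RunPodSandbox attempt that follows fails the same way: before doing any add or delete, the Calico CNI plugin stats /var/lib/calico/nodename, and that file only appears once the calico/node container is running (hence the hint embedded in each error). A quick diagnostic mirroring that gate, as a hedged sketch:

```python
#!/usr/bin/env python3
# Illustrative check mirroring the gate the Calico CNI plugin applies in the
# errors below: it stats /var/lib/calico/nodename, which calico-node writes
# once it has started. Path taken from the log messages.
import os
import sys

NODENAME = "/var/lib/calico/nodename"

if os.path.exists(NODENAME):
    with open(NODENAME) as f:
        print(f"calico-node is up; node name: {f.read().strip()}")
    sys.exit(0)

print(f"stat {NODENAME}: no such file or directory -- "
      "check that the calico/node container is running",
      file=sys.stderr)
sys.exit(1)
```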
Sep 12 17:11:23.489867 containerd[2023]: time="2025-09-12T17:11:23.489686364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nrhn,Uid:ae5d657e-9e31-4c9f-9e66-064135056e24,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:23.634244 containerd[2023]: time="2025-09-12T17:11:23.634173384Z" level=error msg="Failed to destroy network for sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.637068 containerd[2023]: time="2025-09-12T17:11:23.637004328Z" level=error msg="encountered an error cleaning up failed sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.637374 containerd[2023]: time="2025-09-12T17:11:23.637297224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdmfq,Uid:586bc779-7033-4130-9b97-e99099aaf59c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.637787 containerd[2023]: time="2025-09-12T17:11:23.637728852Z" level=error msg="Failed to destroy network for sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.638294 kubelet[3226]: E0912 17:11:23.638229 3226 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.638395 kubelet[3226]: E0912 17:11:23.638327 3226 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gdmfq" Sep 12 17:11:23.638395 kubelet[3226]: E0912 17:11:23.638366 3226 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gdmfq" Sep 12 17:11:23.638880 containerd[2023]: time="2025-09-12T17:11:23.638822688Z" level=error msg="encountered an error 
cleaning up failed sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.639728 kubelet[3226]: E0912 17:11:23.638435 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gdmfq_kube-system(586bc779-7033-4130-9b97-e99099aaf59c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gdmfq_kube-system(586bc779-7033-4130-9b97-e99099aaf59c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gdmfq" podUID="586bc779-7033-4130-9b97-e99099aaf59c" Sep 12 17:11:23.640559 containerd[2023]: time="2025-09-12T17:11:23.640214580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8dc95788b-bgx5f,Uid:175bb5e7-5b87-45af-af71-37ac118306c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.640868 kubelet[3226]: E0912 17:11:23.640590 3226 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.640868 kubelet[3226]: E0912 17:11:23.640666 3226 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8dc95788b-bgx5f" Sep 12 17:11:23.640868 kubelet[3226]: E0912 17:11:23.640723 3226 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8dc95788b-bgx5f" Sep 12 17:11:23.642581 kubelet[3226]: E0912 17:11:23.641502 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8dc95788b-bgx5f_calico-system(175bb5e7-5b87-45af-af71-37ac118306c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-8dc95788b-bgx5f_calico-system(175bb5e7-5b87-45af-af71-37ac118306c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8dc95788b-bgx5f" podUID="175bb5e7-5b87-45af-af71-37ac118306c2" Sep 12 17:11:23.664902 containerd[2023]: time="2025-09-12T17:11:23.664800960Z" level=error msg="Failed to destroy network for sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.669499 containerd[2023]: time="2025-09-12T17:11:23.669394884Z" level=error msg="encountered an error cleaning up failed sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.674082 containerd[2023]: time="2025-09-12T17:11:23.672389532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7dhd5,Uid:6229e9e2-94b6-4d89-8229-2b5bbe16089b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.675532 kubelet[3226]: E0912 17:11:23.675013 3226 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.675532 kubelet[3226]: E0912 17:11:23.675184 3226 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7dhd5" Sep 12 17:11:23.675532 kubelet[3226]: E0912 17:11:23.675224 3226 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7dhd5" Sep 12 17:11:23.675953 kubelet[3226]: E0912 17:11:23.675324 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-7dhd5_kube-system(6229e9e2-94b6-4d89-8229-2b5bbe16089b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7dhd5_kube-system(6229e9e2-94b6-4d89-8229-2b5bbe16089b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7dhd5" podUID="6229e9e2-94b6-4d89-8229-2b5bbe16089b" Sep 12 17:11:23.680027 containerd[2023]: time="2025-09-12T17:11:23.679621008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:11:23.693400 kubelet[3226]: I0912 17:11:23.693336 3226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Sep 12 17:11:23.702180 containerd[2023]: time="2025-09-12T17:11:23.701981761Z" level=error msg="Failed to destroy network for sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.707025 containerd[2023]: time="2025-09-12T17:11:23.705879301Z" level=info msg="StopPodSandbox for \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\"" Sep 12 17:11:23.711475 containerd[2023]: time="2025-09-12T17:11:23.710533753Z" level=info msg="Ensure that sandbox 74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98 in task-service has been cleanup successfully" Sep 12 17:11:23.711923 containerd[2023]: time="2025-09-12T17:11:23.710500105Z" level=error msg="encountered an error cleaning up failed sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.712691 containerd[2023]: time="2025-09-12T17:11:23.712104409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647977b4b6-hgchj,Uid:5ba4c09f-7a52-43c3-843a-207b43648510,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.715787 kubelet[3226]: E0912 17:11:23.714221 3226 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.715787 kubelet[3226]: E0912 17:11:23.714305 3226 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647977b4b6-hgchj" Sep 12 17:11:23.715787 kubelet[3226]: E0912 17:11:23.714344 3226 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647977b4b6-hgchj" Sep 12 17:11:23.716070 kubelet[3226]: E0912 17:11:23.714435 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-647977b4b6-hgchj_calico-apiserver(5ba4c09f-7a52-43c3-843a-207b43648510)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-647977b4b6-hgchj_calico-apiserver(5ba4c09f-7a52-43c3-843a-207b43648510)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-647977b4b6-hgchj" podUID="5ba4c09f-7a52-43c3-843a-207b43648510" Sep 12 17:11:23.743693 kubelet[3226]: I0912 17:11:23.743542 3226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Sep 12 17:11:23.754062 containerd[2023]: time="2025-09-12T17:11:23.754004977Z" level=info msg="StopPodSandbox for \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\"" Sep 12 17:11:23.756472 containerd[2023]: time="2025-09-12T17:11:23.754489081Z" level=info msg="Ensure that sandbox 41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a in task-service has been cleanup successfully" Sep 12 17:11:23.777387 containerd[2023]: time="2025-09-12T17:11:23.773527177Z" level=error msg="Failed to destroy network for sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.784070 containerd[2023]: time="2025-09-12T17:11:23.783992773Z" level=error msg="encountered an error cleaning up failed sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.784211 containerd[2023]: time="2025-09-12T17:11:23.784094953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647977b4b6-44ncj,Uid:345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 17:11:23.785871 kubelet[3226]: E0912 17:11:23.785819 3226 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.789077 kubelet[3226]: E0912 17:11:23.786486 3226 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647977b4b6-44ncj" Sep 12 17:11:23.789077 kubelet[3226]: E0912 17:11:23.788601 3226 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647977b4b6-44ncj" Sep 12 17:11:23.789077 kubelet[3226]: E0912 17:11:23.788698 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-647977b4b6-44ncj_calico-apiserver(345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-647977b4b6-44ncj_calico-apiserver(345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-647977b4b6-44ncj" podUID="345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0" Sep 12 17:11:23.790640 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65-shm.mount: Deactivated successfully. 
Sep 12 17:11:23.844008 containerd[2023]: time="2025-09-12T17:11:23.843620533Z" level=error msg="Failed to destroy network for sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.847792 containerd[2023]: time="2025-09-12T17:11:23.847422493Z" level=error msg="encountered an error cleaning up failed sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.848998 containerd[2023]: time="2025-09-12T17:11:23.847993237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nrhn,Uid:ae5d657e-9e31-4c9f-9e66-064135056e24,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.850956 kubelet[3226]: E0912 17:11:23.850621 3226 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.850956 kubelet[3226]: E0912 17:11:23.850721 3226 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9nrhn" Sep 12 17:11:23.850956 kubelet[3226]: E0912 17:11:23.850775 3226 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9nrhn" Sep 12 17:11:23.853329 kubelet[3226]: E0912 17:11:23.850852 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9nrhn_calico-system(ae5d657e-9e31-4c9f-9e66-064135056e24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9nrhn_calico-system(ae5d657e-9e31-4c9f-9e66-064135056e24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9nrhn" 
podUID="ae5d657e-9e31-4c9f-9e66-064135056e24" Sep 12 17:11:23.851107 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6-shm.mount: Deactivated successfully. Sep 12 17:11:23.870361 containerd[2023]: time="2025-09-12T17:11:23.870183529Z" level=error msg="StopPodSandbox for \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\" failed" error="failed to destroy network for sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.870946 kubelet[3226]: E0912 17:11:23.870863 3226 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Sep 12 17:11:23.871087 kubelet[3226]: E0912 17:11:23.870959 3226 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98"} Sep 12 17:11:23.871087 kubelet[3226]: E0912 17:11:23.871044 3226 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"175bb5e7-5b87-45af-af71-37ac118306c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:23.871387 kubelet[3226]: E0912 17:11:23.871087 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"175bb5e7-5b87-45af-af71-37ac118306c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8dc95788b-bgx5f" podUID="175bb5e7-5b87-45af-af71-37ac118306c2" Sep 12 17:11:23.892877 containerd[2023]: time="2025-09-12T17:11:23.892491398Z" level=error msg="StopPodSandbox for \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\" failed" error="failed to destroy network for sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:23.893055 kubelet[3226]: E0912 17:11:23.892854 3226 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Sep 12 17:11:23.893055 kubelet[3226]: E0912 17:11:23.892920 3226 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a"} Sep 12 17:11:23.893055 kubelet[3226]: E0912 17:11:23.892977 3226 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"586bc779-7033-4130-9b97-e99099aaf59c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:23.893055 kubelet[3226]: E0912 17:11:23.893017 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"586bc779-7033-4130-9b97-e99099aaf59c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gdmfq" podUID="586bc779-7033-4130-9b97-e99099aaf59c" Sep 12 17:11:24.152986 kubelet[3226]: E0912 17:11:24.152924 3226 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:24.153165 kubelet[3226]: E0912 17:11:24.153056 3226 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-ca-bundle podName:068a7ef4-ec44-4071-aa63-64c2517f0138 nodeName:}" failed. No retries permitted until 2025-09-12 17:11:24.653022679 +0000 UTC m=+39.576153299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-ca-bundle") pod "whisker-57b5cb98ff-kxfbh" (UID: "068a7ef4-ec44-4071-aa63-64c2517f0138") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:24.153527 kubelet[3226]: E0912 17:11:24.152925 3226 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Sep 12 17:11:24.153527 kubelet[3226]: E0912 17:11:24.153404 3226 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-backend-key-pair podName:068a7ef4-ec44-4071-aa63-64c2517f0138 nodeName:}" failed. No retries permitted until 2025-09-12 17:11:24.653383651 +0000 UTC m=+39.576514259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-backend-key-pair") pod "whisker-57b5cb98ff-kxfbh" (UID: "068a7ef4-ec44-4071-aa63-64c2517f0138") : failed to sync secret cache: timed out waiting for the condition Sep 12 17:11:24.225242 containerd[2023]: time="2025-09-12T17:11:24.224745167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-vszm6,Uid:94f00bb7-79a9-4162-8a58-e6291343a943,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:24.338359 containerd[2023]: time="2025-09-12T17:11:24.338275848Z" level=error msg="Failed to destroy network for sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:24.339035 containerd[2023]: time="2025-09-12T17:11:24.338966292Z" level=error msg="encountered an error cleaning up failed sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:24.339118 containerd[2023]: time="2025-09-12T17:11:24.339060348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-vszm6,Uid:94f00bb7-79a9-4162-8a58-e6291343a943,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:24.339477 kubelet[3226]: E0912 17:11:24.339396 3226 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:24.339614 kubelet[3226]: E0912 17:11:24.339559 3226 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-vszm6" Sep 12 17:11:24.339716 kubelet[3226]: E0912 17:11:24.339623 3226 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-vszm6" Sep 12 17:11:24.339784 kubelet[3226]: E0912 17:11:24.339694 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-54d579b49d-vszm6_calico-system(94f00bb7-79a9-4162-8a58-e6291343a943)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-vszm6_calico-system(94f00bb7-79a9-4162-8a58-e6291343a943)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-vszm6" podUID="94f00bb7-79a9-4162-8a58-e6291343a943" Sep 12 17:11:24.734268 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610-shm.mount: Deactivated successfully. Sep 12 17:11:24.749132 kubelet[3226]: I0912 17:11:24.749057 3226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" Sep 12 17:11:24.750498 containerd[2023]: time="2025-09-12T17:11:24.750324770Z" level=info msg="StopPodSandbox for \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\"" Sep 12 17:11:24.752043 containerd[2023]: time="2025-09-12T17:11:24.751244330Z" level=info msg="Ensure that sandbox 70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6 in task-service has been cleanup successfully" Sep 12 17:11:24.754291 kubelet[3226]: I0912 17:11:24.754230 3226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:24.760251 containerd[2023]: time="2025-09-12T17:11:24.757701962Z" level=info msg="StopPodSandbox for \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\"" Sep 12 17:11:24.763053 containerd[2023]: time="2025-09-12T17:11:24.761622074Z" level=info msg="Ensure that sandbox d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65 in task-service has been cleanup successfully" Sep 12 17:11:24.769144 kubelet[3226]: I0912 17:11:24.769091 3226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Sep 12 17:11:24.774927 containerd[2023]: time="2025-09-12T17:11:24.773775494Z" level=info msg="StopPodSandbox for \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\"" Sep 12 17:11:24.778026 containerd[2023]: time="2025-09-12T17:11:24.777740150Z" level=info msg="Ensure that sandbox 749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770 in task-service has been cleanup successfully" Sep 12 17:11:24.785573 kubelet[3226]: I0912 17:11:24.785388 3226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Sep 12 17:11:24.795021 containerd[2023]: time="2025-09-12T17:11:24.792887954Z" level=info msg="StopPodSandbox for \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\"" Sep 12 17:11:24.795311 containerd[2023]: time="2025-09-12T17:11:24.795255122Z" level=info msg="Ensure that sandbox f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc in task-service has been cleanup successfully" Sep 12 17:11:24.803566 kubelet[3226]: I0912 17:11:24.803504 3226 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Sep 12 17:11:24.809724 containerd[2023]: time="2025-09-12T17:11:24.807373670Z" level=info msg="StopPodSandbox for \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\"" Sep 12 17:11:24.809724 containerd[2023]: time="2025-09-12T17:11:24.807714602Z" level=info msg="Ensure that sandbox 8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610 in task-service has been cleanup successfully" Sep 12 17:11:24.813612 containerd[2023]: time="2025-09-12T17:11:24.813549362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57b5cb98ff-kxfbh,Uid:068a7ef4-ec44-4071-aa63-64c2517f0138,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:24.968720 containerd[2023]: time="2025-09-12T17:11:24.968641827Z" level=error msg="StopPodSandbox for \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\" failed" error="failed to destroy network for sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:24.969468 kubelet[3226]: E0912 17:11:24.968937 3226 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Sep 12 17:11:24.969468 kubelet[3226]: E0912 17:11:24.969011 3226 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610"} Sep 12 17:11:24.969468 kubelet[3226]: E0912 17:11:24.969072 3226 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94f00bb7-79a9-4162-8a58-e6291343a943\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:24.969468 kubelet[3226]: E0912 17:11:24.969123 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94f00bb7-79a9-4162-8a58-e6291343a943\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-vszm6" podUID="94f00bb7-79a9-4162-8a58-e6291343a943" Sep 12 17:11:24.991419 containerd[2023]: time="2025-09-12T17:11:24.991241715Z" level=error msg="StopPodSandbox for \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\" failed" error="failed to destroy network for sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:24.991863 kubelet[3226]: E0912 17:11:24.991800 3226 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:24.991983 kubelet[3226]: E0912 17:11:24.991895 3226 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65"} Sep 12 17:11:24.992040 kubelet[3226]: E0912 17:11:24.991976 3226 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:24.992144 kubelet[3226]: E0912 17:11:24.992026 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-647977b4b6-44ncj" podUID="345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0" Sep 12 17:11:24.994266 containerd[2023]: time="2025-09-12T17:11:24.993780183Z" level=error msg="StopPodSandbox for \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\" failed" error="failed to destroy network for sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:24.994416 kubelet[3226]: E0912 17:11:24.994169 3226 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Sep 12 17:11:24.994416 kubelet[3226]: E0912 17:11:24.994234 3226 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc"} Sep 12 17:11:24.994416 kubelet[3226]: E0912 17:11:24.994287 3226 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6229e9e2-94b6-4d89-8229-2b5bbe16089b\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:24.994416 kubelet[3226]: E0912 17:11:24.994332 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6229e9e2-94b6-4d89-8229-2b5bbe16089b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7dhd5" podUID="6229e9e2-94b6-4d89-8229-2b5bbe16089b" Sep 12 17:11:24.996216 containerd[2023]: time="2025-09-12T17:11:24.995993451Z" level=error msg="StopPodSandbox for \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\" failed" error="failed to destroy network for sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:24.996666 kubelet[3226]: E0912 17:11:24.996423 3226 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" Sep 12 17:11:24.997161 kubelet[3226]: E0912 17:11:24.996984 3226 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"} Sep 12 17:11:24.997161 kubelet[3226]: E0912 17:11:24.997130 3226 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae5d657e-9e31-4c9f-9e66-064135056e24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:24.997413 kubelet[3226]: E0912 17:11:24.997178 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae5d657e-9e31-4c9f-9e66-064135056e24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9nrhn" podUID="ae5d657e-9e31-4c9f-9e66-064135056e24" Sep 12 17:11:25.010399 containerd[2023]: time="2025-09-12T17:11:25.010322603Z" level=error 
msg="StopPodSandbox for \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\" failed" error="failed to destroy network for sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.011841 kubelet[3226]: E0912 17:11:25.011781 3226 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Sep 12 17:11:25.012139 kubelet[3226]: E0912 17:11:25.011855 3226 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770"} Sep 12 17:11:25.012139 kubelet[3226]: E0912 17:11:25.011913 3226 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ba4c09f-7a52-43c3-843a-207b43648510\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:25.012139 kubelet[3226]: E0912 17:11:25.011951 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ba4c09f-7a52-43c3-843a-207b43648510\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-647977b4b6-hgchj" podUID="5ba4c09f-7a52-43c3-843a-207b43648510" Sep 12 17:11:25.050333 containerd[2023]: time="2025-09-12T17:11:25.050214647Z" level=error msg="Failed to destroy network for sandbox \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.052988 containerd[2023]: time="2025-09-12T17:11:25.052916279Z" level=error msg="encountered an error cleaning up failed sandbox \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.053126 containerd[2023]: time="2025-09-12T17:11:25.053019911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57b5cb98ff-kxfbh,Uid:068a7ef4-ec44-4071-aa63-64c2517f0138,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.054870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a-shm.mount: Deactivated successfully. Sep 12 17:11:25.055611 kubelet[3226]: E0912 17:11:25.055230 3226 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.055611 kubelet[3226]: E0912 17:11:25.055310 3226 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57b5cb98ff-kxfbh" Sep 12 17:11:25.055611 kubelet[3226]: E0912 17:11:25.055351 3226 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57b5cb98ff-kxfbh" Sep 12 17:11:25.055918 kubelet[3226]: E0912 17:11:25.055413 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57b5cb98ff-kxfbh_calico-system(068a7ef4-ec44-4071-aa63-64c2517f0138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57b5cb98ff-kxfbh_calico-system(068a7ef4-ec44-4071-aa63-64c2517f0138)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57b5cb98ff-kxfbh" podUID="068a7ef4-ec44-4071-aa63-64c2517f0138" Sep 12 17:11:25.809473 kubelet[3226]: I0912 17:11:25.809346 3226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Sep 12 17:11:25.815919 containerd[2023]: time="2025-09-12T17:11:25.815713647Z" level=info msg="StopPodSandbox for \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\"" Sep 12 17:11:25.817497 containerd[2023]: time="2025-09-12T17:11:25.816175947Z" level=info msg="Ensure that sandbox 0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a in task-service has been cleanup successfully" Sep 12 17:11:25.901935 containerd[2023]: time="2025-09-12T17:11:25.901866496Z" level=error msg="StopPodSandbox for \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\" failed" error="failed to destroy network for sandbox 
\"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.902523 kubelet[3226]: E0912 17:11:25.902252 3226 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Sep 12 17:11:25.902523 kubelet[3226]: E0912 17:11:25.902322 3226 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a"} Sep 12 17:11:25.902523 kubelet[3226]: E0912 17:11:25.902386 3226 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"068a7ef4-ec44-4071-aa63-64c2517f0138\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:25.902523 kubelet[3226]: E0912 17:11:25.902425 3226 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"068a7ef4-ec44-4071-aa63-64c2517f0138\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57b5cb98ff-kxfbh" podUID="068a7ef4-ec44-4071-aa63-64c2517f0138" Sep 12 17:11:31.723173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1245562566.mount: Deactivated successfully. 
Sep 12 17:11:31.855583 containerd[2023]: time="2025-09-12T17:11:31.855504897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:31.857132 containerd[2023]: time="2025-09-12T17:11:31.856929441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 17:11:31.859474 containerd[2023]: time="2025-09-12T17:11:31.858033957Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:31.861996 containerd[2023]: time="2025-09-12T17:11:31.861944193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:31.863678 containerd[2023]: time="2025-09-12T17:11:31.863601477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 8.183907209s" Sep 12 17:11:31.863806 containerd[2023]: time="2025-09-12T17:11:31.863680233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 17:11:31.906971 containerd[2023]: time="2025-09-12T17:11:31.906788169Z" level=info msg="CreateContainer within sandbox \"4ead56d660139e4a20c651d4f712f3d1ca58667323314f31158e84aa19424b07\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:11:32.020413 containerd[2023]: time="2025-09-12T17:11:32.020341950Z" level=info msg="CreateContainer within sandbox \"4ead56d660139e4a20c651d4f712f3d1ca58667323314f31158e84aa19424b07\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3df49f6f36d5a270edf6c24644730169cb9774b1168c48fd384ab71f2d9809df\"" Sep 12 17:11:32.023047 containerd[2023]: time="2025-09-12T17:11:32.021754146Z" level=info msg="StartContainer for \"3df49f6f36d5a270edf6c24644730169cb9774b1168c48fd384ab71f2d9809df\"" Sep 12 17:11:32.080995 systemd[1]: Started cri-containerd-3df49f6f36d5a270edf6c24644730169cb9774b1168c48fd384ab71f2d9809df.scope - libcontainer container 3df49f6f36d5a270edf6c24644730169cb9774b1168c48fd384ab71f2d9809df. Sep 12 17:11:32.146286 containerd[2023]: time="2025-09-12T17:11:32.146170879Z" level=info msg="StartContainer for \"3df49f6f36d5a270edf6c24644730169cb9774b1168c48fd384ab71f2d9809df\" returns successfully" Sep 12 17:11:32.419024 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:11:32.419300 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep 12 17:11:32.679587 containerd[2023]: time="2025-09-12T17:11:32.679536033Z" level=info msg="StopPodSandbox for \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\"" Sep 12 17:11:32.915016 kubelet[3226]: I0912 17:11:32.914914 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jbw2x" podStartSLOduration=2.493439306 podStartE2EDuration="21.914881978s" podCreationTimestamp="2025-09-12 17:11:11 +0000 UTC" firstStartedPulling="2025-09-12 17:11:12.443943157 +0000 UTC m=+27.367073777" lastFinishedPulling="2025-09-12 17:11:31.865385841 +0000 UTC m=+46.788516449" observedRunningTime="2025-09-12 17:11:32.913499218 +0000 UTC m=+47.836629862" watchObservedRunningTime="2025-09-12 17:11:32.914881978 +0000 UTC m=+47.838012598" Sep 12 17:11:34.463098 containerd[2023]: time="2025-09-12T17:11:34.461480362Z" level=info msg="StopPodSandbox for \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\"" Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.317 [INFO][4621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.319 [INFO][4621] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" iface="eth0" netns="/var/run/netns/cni-0bc1be22-8665-dca7-caf3-05fc5a36d9d8" Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.321 [INFO][4621] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" iface="eth0" netns="/var/run/netns/cni-0bc1be22-8665-dca7-caf3-05fc5a36d9d8" Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.366 [INFO][4621] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" iface="eth0" netns="/var/run/netns/cni-0bc1be22-8665-dca7-caf3-05fc5a36d9d8" Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.366 [INFO][4621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.366 [INFO][4621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.565 [INFO][4715] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" HandleID="k8s-pod-network.0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Workload="ip--172--31--22--10-k8s-whisker--57b5cb98ff--kxfbh-eth0" Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.566 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.566 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.599 [WARNING][4715] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" HandleID="k8s-pod-network.0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Workload="ip--172--31--22--10-k8s-whisker--57b5cb98ff--kxfbh-eth0" Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.599 [INFO][4715] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" HandleID="k8s-pod-network.0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Workload="ip--172--31--22--10-k8s-whisker--57b5cb98ff--kxfbh-eth0" Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.603 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:34.618867 containerd[2023]: 2025-09-12 17:11:34.614 [INFO][4621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Sep 12 17:11:34.630301 systemd[1]: run-netns-cni\x2d0bc1be22\x2d8665\x2ddca7\x2dcaf3\x2d05fc5a36d9d8.mount: Deactivated successfully. Sep 12 17:11:34.636201 containerd[2023]: time="2025-09-12T17:11:34.635492471Z" level=info msg="TearDown network for sandbox \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\" successfully" Sep 12 17:11:34.636201 containerd[2023]: time="2025-09-12T17:11:34.635552555Z" level=info msg="StopPodSandbox for \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\" returns successfully" Sep 12 17:11:34.752749 kubelet[3226]: I0912 17:11:34.751833 3226 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqxjf\" (UniqueName: \"kubernetes.io/projected/068a7ef4-ec44-4071-aa63-64c2517f0138-kube-api-access-jqxjf\") pod \"068a7ef4-ec44-4071-aa63-64c2517f0138\" (UID: \"068a7ef4-ec44-4071-aa63-64c2517f0138\") " Sep 12 17:11:34.752749 kubelet[3226]: I0912 17:11:34.751928 3226 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-ca-bundle\") pod \"068a7ef4-ec44-4071-aa63-64c2517f0138\" (UID: \"068a7ef4-ec44-4071-aa63-64c2517f0138\") " Sep 12 17:11:34.752749 kubelet[3226]: I0912 17:11:34.751996 3226 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-backend-key-pair\") pod \"068a7ef4-ec44-4071-aa63-64c2517f0138\" (UID: \"068a7ef4-ec44-4071-aa63-64c2517f0138\") " Sep 12 17:11:34.764561 kubelet[3226]: I0912 17:11:34.763221 3226 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "068a7ef4-ec44-4071-aa63-64c2517f0138" (UID: "068a7ef4-ec44-4071-aa63-64c2517f0138"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:11:34.774159 systemd[1]: var-lib-kubelet-pods-068a7ef4\x2dec44\x2d4071\x2daa63\x2d64c2517f0138-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 17:11:34.777384 kubelet[3226]: I0912 17:11:34.775921 3226 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "068a7ef4-ec44-4071-aa63-64c2517f0138" (UID: "068a7ef4-ec44-4071-aa63-64c2517f0138"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:11:34.774379 systemd[1]: var-lib-kubelet-pods-068a7ef4\x2dec44\x2d4071\x2daa63\x2d64c2517f0138-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djqxjf.mount: Deactivated successfully. Sep 12 17:11:34.780118 kubelet[3226]: I0912 17:11:34.779436 3226 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068a7ef4-ec44-4071-aa63-64c2517f0138-kube-api-access-jqxjf" (OuterVolumeSpecName: "kube-api-access-jqxjf") pod "068a7ef4-ec44-4071-aa63-64c2517f0138" (UID: "068a7ef4-ec44-4071-aa63-64c2517f0138"). InnerVolumeSpecName "kube-api-access-jqxjf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.625 [INFO][4765] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.628 [INFO][4765] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" iface="eth0" netns="/var/run/netns/cni-5a39e36e-9509-eb1f-0e9f-8a01569ecf96" Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.630 [INFO][4765] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" iface="eth0" netns="/var/run/netns/cni-5a39e36e-9509-eb1f-0e9f-8a01569ecf96" Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.630 [INFO][4765] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" iface="eth0" netns="/var/run/netns/cni-5a39e36e-9509-eb1f-0e9f-8a01569ecf96" Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.631 [INFO][4765] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.631 [INFO][4765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.734 [INFO][4796] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" HandleID="k8s-pod-network.41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.734 [INFO][4796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.734 [INFO][4796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.748 [WARNING][4796] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" HandleID="k8s-pod-network.41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.749 [INFO][4796] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" HandleID="k8s-pod-network.41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.759 [INFO][4796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:34.782297 containerd[2023]: 2025-09-12 17:11:34.770 [INFO][4765] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Sep 12 17:11:34.792737 systemd[1]: run-netns-cni\x2d5a39e36e\x2d9509\x2deb1f\x2d0e9f\x2d8a01569ecf96.mount: Deactivated successfully. Sep 12 17:11:34.795015 containerd[2023]: time="2025-09-12T17:11:34.794630532Z" level=info msg="TearDown network for sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\" successfully" Sep 12 17:11:34.795015 containerd[2023]: time="2025-09-12T17:11:34.794682672Z" level=info msg="StopPodSandbox for \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\" returns successfully" Sep 12 17:11:34.798628 containerd[2023]: time="2025-09-12T17:11:34.798459504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdmfq,Uid:586bc779-7033-4130-9b97-e99099aaf59c,Namespace:kube-system,Attempt:1,}" Sep 12 17:11:34.853107 kubelet[3226]: I0912 17:11:34.853059 3226 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-backend-key-pair\") on node \"ip-172-31-22-10\" DevicePath \"\"" Sep 12 17:11:34.855384 kubelet[3226]: I0912 17:11:34.853495 3226 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jqxjf\" (UniqueName: \"kubernetes.io/projected/068a7ef4-ec44-4071-aa63-64c2517f0138-kube-api-access-jqxjf\") on node \"ip-172-31-22-10\" DevicePath \"\"" Sep 12 17:11:34.855384 kubelet[3226]: I0912 17:11:34.853529 3226 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/068a7ef4-ec44-4071-aa63-64c2517f0138-whisker-ca-bundle\") on node \"ip-172-31-22-10\" DevicePath \"\"" Sep 12 17:11:34.891228 systemd[1]: Removed slice kubepods-besteffort-pod068a7ef4_ec44_4071_aa63_64c2517f0138.slice - libcontainer container kubepods-besteffort-pod068a7ef4_ec44_4071_aa63_64c2517f0138.slice. Sep 12 17:11:35.040139 systemd[1]: Created slice kubepods-besteffort-pod161fab82_1a8a_451e_a83a_d466a9dbc262.slice - libcontainer container kubepods-besteffort-pod161fab82_1a8a_451e_a83a_d466a9dbc262.slice. 
Sep 12 17:11:35.162967 kubelet[3226]: I0912 17:11:35.162566 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzkcc\" (UniqueName: \"kubernetes.io/projected/161fab82-1a8a-451e-a83a-d466a9dbc262-kube-api-access-hzkcc\") pod \"whisker-5bb7568d94-w2sxb\" (UID: \"161fab82-1a8a-451e-a83a-d466a9dbc262\") " pod="calico-system/whisker-5bb7568d94-w2sxb" Sep 12 17:11:35.162967 kubelet[3226]: I0912 17:11:35.162663 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/161fab82-1a8a-451e-a83a-d466a9dbc262-whisker-backend-key-pair\") pod \"whisker-5bb7568d94-w2sxb\" (UID: \"161fab82-1a8a-451e-a83a-d466a9dbc262\") " pod="calico-system/whisker-5bb7568d94-w2sxb" Sep 12 17:11:35.162967 kubelet[3226]: I0912 17:11:35.162702 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/161fab82-1a8a-451e-a83a-d466a9dbc262-whisker-ca-bundle\") pod \"whisker-5bb7568d94-w2sxb\" (UID: \"161fab82-1a8a-451e-a83a-d466a9dbc262\") " pod="calico-system/whisker-5bb7568d94-w2sxb" Sep 12 17:11:35.237993 (udev-worker)[4605]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:11:35.239913 systemd-networkd[1937]: calie16eead0b86: Link UP Sep 12 17:11:35.244174 systemd-networkd[1937]: calie16eead0b86: Gained carrier Sep 12 17:11:35.351613 containerd[2023]: time="2025-09-12T17:11:35.350432746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bb7568d94-w2sxb,Uid:161fab82-1a8a-451e-a83a-d466a9dbc262,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:34.941 [INFO][4809] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:34.995 [INFO][4809] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0 coredns-668d6bf9bc- kube-system 586bc779-7033-4130-9b97-e99099aaf59c 900 0 2025-09-12 17:10:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-10 coredns-668d6bf9bc-gdmfq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie16eead0b86 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdmfq" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:34.995 [INFO][4809] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdmfq" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.119 [INFO][4816] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" HandleID="k8s-pod-network.0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" Sep 12 
17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.119 [INFO][4816] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" HandleID="k8s-pod-network.0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103730), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-10", "pod":"coredns-668d6bf9bc-gdmfq", "timestamp":"2025-09-12 17:11:35.119630337 +0000 UTC"}, Hostname:"ip-172-31-22-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.120 [INFO][4816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.120 [INFO][4816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.120 [INFO][4816] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-10' Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.153 [INFO][4816] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" host="ip-172-31-22-10" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.170 [INFO][4816] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-10" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.179 [INFO][4816] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.184 [INFO][4816] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.190 [INFO][4816] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.190 [INFO][4816] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" host="ip-172-31-22-10" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.192 [INFO][4816] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5 Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.199 [INFO][4816] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" host="ip-172-31-22-10" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.213 [INFO][4816] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.193/26] block=192.168.16.192/26 handle="k8s-pod-network.0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" host="ip-172-31-22-10" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.213 [INFO][4816] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.193/26] handle="k8s-pod-network.0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" host="ip-172-31-22-10" Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.214 [INFO][4816] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:35.455856 containerd[2023]: 2025-09-12 17:11:35.214 [INFO][4816] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.193/26] IPv6=[] ContainerID="0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" HandleID="k8s-pod-network.0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" Sep 12 17:11:35.462683 containerd[2023]: 2025-09-12 17:11:35.220 [INFO][4809] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdmfq" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"586bc779-7033-4130-9b97-e99099aaf59c", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"", Pod:"coredns-668d6bf9bc-gdmfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie16eead0b86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:35.462683 containerd[2023]: 2025-09-12 17:11:35.221 [INFO][4809] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.193/32] ContainerID="0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdmfq" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" Sep 12 17:11:35.462683 containerd[2023]: 2025-09-12 17:11:35.221 [INFO][4809] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie16eead0b86 ContainerID="0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdmfq" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" Sep 12 17:11:35.462683 containerd[2023]: 2025-09-12 17:11:35.284 [INFO][4809] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-gdmfq" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" Sep 12 17:11:35.462683 containerd[2023]: 2025-09-12 17:11:35.285 [INFO][4809] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdmfq" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"586bc779-7033-4130-9b97-e99099aaf59c", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5", Pod:"coredns-668d6bf9bc-gdmfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie16eead0b86", MAC:"06:02:24:32:99:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:35.462683 containerd[2023]: 2025-09-12 17:11:35.443 [INFO][4809] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdmfq" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0" Sep 12 17:11:35.470338 containerd[2023]: time="2025-09-12T17:11:35.470217299Z" level=info msg="StopPodSandbox for \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\"" Sep 12 17:11:35.475177 containerd[2023]: time="2025-09-12T17:11:35.475089227Z" level=info msg="StopPodSandbox for \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\"" Sep 12 17:11:35.483793 kubelet[3226]: I0912 17:11:35.482298 3226 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068a7ef4-ec44-4071-aa63-64c2517f0138" path="/var/lib/kubelet/pods/068a7ef4-ec44-4071-aa63-64c2517f0138/volumes" Sep 12 17:11:35.634462 containerd[2023]: time="2025-09-12T17:11:35.633960768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:35.635133 containerd[2023]: time="2025-09-12T17:11:35.634377708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:35.637242 containerd[2023]: time="2025-09-12T17:11:35.636671316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:35.640045 containerd[2023]: time="2025-09-12T17:11:35.639102576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:35.793696 systemd[1]: Started cri-containerd-0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5.scope - libcontainer container 0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5. Sep 12 17:11:36.111622 systemd-networkd[1937]: cali588c5355d95: Link UP Sep 12 17:11:36.117844 systemd-networkd[1937]: cali588c5355d95: Gained carrier Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.595 [INFO][4832] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.648 [INFO][4832] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0 whisker-5bb7568d94- calico-system 161fab82-1a8a-451e-a83a-d466a9dbc262 913 0 2025-09-12 17:11:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5bb7568d94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-22-10 whisker-5bb7568d94-w2sxb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali588c5355d95 [] [] }} ContainerID="f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" Namespace="calico-system" Pod="whisker-5bb7568d94-w2sxb" WorkloadEndpoint="ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.649 [INFO][4832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" Namespace="calico-system" Pod="whisker-5bb7568d94-w2sxb" WorkloadEndpoint="ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.790 [INFO][4897] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" HandleID="k8s-pod-network.f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" Workload="ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.790 [INFO][4897] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" HandleID="k8s-pod-network.f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" Workload="ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000369670), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-10", "pod":"whisker-5bb7568d94-w2sxb", "timestamp":"2025-09-12 17:11:35.790211125 +0000 UTC"}, Hostname:"ip-172-31-22-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.791 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.791 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.791 [INFO][4897] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-10' Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.851 [INFO][4897] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" host="ip-172-31-22-10" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.865 [INFO][4897] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-10" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.904 [INFO][4897] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.937 [INFO][4897] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.981 [INFO][4897] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.981 [INFO][4897] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" host="ip-172-31-22-10" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:35.997 [INFO][4897] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7 Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:36.024 [INFO][4897] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" host="ip-172-31-22-10" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:36.087 [INFO][4897] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.194/26] block=192.168.16.192/26 handle="k8s-pod-network.f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" host="ip-172-31-22-10" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:36.087 [INFO][4897] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.194/26] handle="k8s-pod-network.f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" host="ip-172-31-22-10" Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:36.087 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:11:36.237517 containerd[2023]: 2025-09-12 17:11:36.087 [INFO][4897] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.194/26] IPv6=[] ContainerID="f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" HandleID="k8s-pod-network.f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" Workload="ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0" Sep 12 17:11:36.239758 containerd[2023]: 2025-09-12 17:11:36.100 [INFO][4832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" Namespace="calico-system" Pod="whisker-5bb7568d94-w2sxb" WorkloadEndpoint="ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0", GenerateName:"whisker-5bb7568d94-", Namespace:"calico-system", SelfLink:"", UID:"161fab82-1a8a-451e-a83a-d466a9dbc262", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bb7568d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"", Pod:"whisker-5bb7568d94-w2sxb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali588c5355d95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:36.239758 containerd[2023]: 2025-09-12 17:11:36.101 [INFO][4832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.194/32] ContainerID="f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" Namespace="calico-system" Pod="whisker-5bb7568d94-w2sxb" WorkloadEndpoint="ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0" Sep 12 17:11:36.239758 containerd[2023]: 2025-09-12 17:11:36.101 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali588c5355d95 ContainerID="f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" Namespace="calico-system" Pod="whisker-5bb7568d94-w2sxb" WorkloadEndpoint="ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0" Sep 12 17:11:36.239758 containerd[2023]: 2025-09-12 17:11:36.110 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" Namespace="calico-system" Pod="whisker-5bb7568d94-w2sxb" WorkloadEndpoint="ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0" Sep 12 17:11:36.239758 containerd[2023]: 2025-09-12 17:11:36.126 [INFO][4832] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" Namespace="calico-system" Pod="whisker-5bb7568d94-w2sxb" 
WorkloadEndpoint="ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0", GenerateName:"whisker-5bb7568d94-", Namespace:"calico-system", SelfLink:"", UID:"161fab82-1a8a-451e-a83a-d466a9dbc262", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bb7568d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7", Pod:"whisker-5bb7568d94-w2sxb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali588c5355d95", MAC:"82:67:00:74:cb:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:36.239758 containerd[2023]: 2025-09-12 17:11:36.216 [INFO][4832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7" Namespace="calico-system" Pod="whisker-5bb7568d94-w2sxb" WorkloadEndpoint="ip--172--31--22--10-k8s-whisker--5bb7568d94--w2sxb-eth0" Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:35.820 [INFO][4866] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:35.821 [INFO][4866] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" iface="eth0" netns="/var/run/netns/cni-2d03c7fb-3739-b16b-c612-ce2c41ccc427" Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:35.822 [INFO][4866] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" iface="eth0" netns="/var/run/netns/cni-2d03c7fb-3739-b16b-c612-ce2c41ccc427" Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:35.823 [INFO][4866] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" iface="eth0" netns="/var/run/netns/cni-2d03c7fb-3739-b16b-c612-ce2c41ccc427" Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:35.824 [INFO][4866] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:35.825 [INFO][4866] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:35.985 [INFO][4919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" HandleID="k8s-pod-network.d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:35.985 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:36.092 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:36.206 [WARNING][4919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" HandleID="k8s-pod-network.d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:36.206 [INFO][4919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" HandleID="k8s-pod-network.d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:36.230 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:36.275682 containerd[2023]: 2025-09-12 17:11:36.256 [INFO][4866] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:36.287259 systemd[1]: run-netns-cni\x2d2d03c7fb\x2d3739\x2db16b\x2dc612\x2dce2c41ccc427.mount: Deactivated successfully. 
Sep 12 17:11:36.299926 containerd[2023]: time="2025-09-12T17:11:36.299850803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdmfq,Uid:586bc779-7033-4130-9b97-e99099aaf59c,Namespace:kube-system,Attempt:1,} returns sandbox id \"0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5\"" Sep 12 17:11:36.316977 containerd[2023]: time="2025-09-12T17:11:36.316902467Z" level=info msg="CreateContainer within sandbox \"0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:11:36.320516 containerd[2023]: time="2025-09-12T17:11:36.318641651Z" level=info msg="TearDown network for sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\" successfully" Sep 12 17:11:36.320516 containerd[2023]: time="2025-09-12T17:11:36.318697655Z" level=info msg="StopPodSandbox for \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\" returns successfully" Sep 12 17:11:36.321088 containerd[2023]: time="2025-09-12T17:11:36.321015995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647977b4b6-44ncj,Uid:345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.059 [INFO][4873] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.060 [INFO][4873] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" iface="eth0" netns="/var/run/netns/cni-cddda654-06a9-cc9f-32ae-624d559c21a0" Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.061 [INFO][4873] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" iface="eth0" netns="/var/run/netns/cni-cddda654-06a9-cc9f-32ae-624d559c21a0" Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.063 [INFO][4873] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" iface="eth0" netns="/var/run/netns/cni-cddda654-06a9-cc9f-32ae-624d559c21a0" Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.063 [INFO][4873] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.064 [INFO][4873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.188 [INFO][4930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" HandleID="k8s-pod-network.74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0" Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.189 [INFO][4930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.234 [INFO][4930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.313 [WARNING][4930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" HandleID="k8s-pod-network.74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0" Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.314 [INFO][4930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" HandleID="k8s-pod-network.74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0" Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.332 [INFO][4930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:36.419708 containerd[2023]: 2025-09-12 17:11:36.362 [INFO][4873] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Sep 12 17:11:36.448957 containerd[2023]: time="2025-09-12T17:11:36.446349264Z" level=info msg="TearDown network for sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\" successfully" Sep 12 17:11:36.448957 containerd[2023]: time="2025-09-12T17:11:36.446403444Z" level=info msg="StopPodSandbox for \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\" returns successfully" Sep 12 17:11:36.448334 systemd-networkd[1937]: calie16eead0b86: Gained IPv6LL Sep 12 17:11:36.449722 systemd[1]: run-netns-cni\x2dcddda654\x2d06a9\x2dcc9f\x2d32ae\x2d624d559c21a0.mount: Deactivated successfully. Sep 12 17:11:36.466425 containerd[2023]: time="2025-09-12T17:11:36.465234480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8dc95788b-bgx5f,Uid:175bb5e7-5b87-45af-af71-37ac118306c2,Namespace:calico-system,Attempt:1,}" Sep 12 17:11:36.521472 containerd[2023]: time="2025-09-12T17:11:36.519118380Z" level=info msg="CreateContainer within sandbox \"0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f6af5e352a5e75bcc5a5355ecdbd9162e92e0ad339816ee61cefefb980adbf5\"" Sep 12 17:11:36.526687 containerd[2023]: time="2025-09-12T17:11:36.526608948Z" level=info msg="StartContainer for \"3f6af5e352a5e75bcc5a5355ecdbd9162e92e0ad339816ee61cefefb980adbf5\"" Sep 12 17:11:36.539851 containerd[2023]: time="2025-09-12T17:11:36.534887244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:36.539851 containerd[2023]: time="2025-09-12T17:11:36.534994980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:36.539851 containerd[2023]: time="2025-09-12T17:11:36.535046220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:36.539851 containerd[2023]: time="2025-09-12T17:11:36.535232376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:11:36.655001 systemd[1]: Started cri-containerd-f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7.scope - libcontainer container f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7.
Sep 12 17:11:36.777820 systemd[1]: Started cri-containerd-3f6af5e352a5e75bcc5a5355ecdbd9162e92e0ad339816ee61cefefb980adbf5.scope - libcontainer container 3f6af5e352a5e75bcc5a5355ecdbd9162e92e0ad339816ee61cefefb980adbf5.
Sep 12 17:11:37.023183 containerd[2023]: time="2025-09-12T17:11:37.023128823Z" level=info msg="StartContainer for \"3f6af5e352a5e75bcc5a5355ecdbd9162e92e0ad339816ee61cefefb980adbf5\" returns successfully"
Sep 12 17:11:37.137942 systemd-networkd[1937]: cali12385b46600: Link UP
Sep 12 17:11:37.138289 systemd-networkd[1937]: cali12385b46600: Gained carrier
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:36.817 [INFO][4978] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0 calico-kube-controllers-8dc95788b- calico-system 175bb5e7-5b87-45af-af71-37ac118306c2 926 0 2025-09-12 17:11:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8dc95788b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-10 calico-kube-controllers-8dc95788b-bgx5f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali12385b46600 [] [] }} ContainerID="af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" Namespace="calico-system" Pod="calico-kube-controllers-8dc95788b-bgx5f" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:36.818 [INFO][4978] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" Namespace="calico-system" Pod="calico-kube-controllers-8dc95788b-bgx5f" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:36.993 [INFO][5039] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" HandleID="k8s-pod-network.af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:36.993 [INFO][5039] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" HandleID="k8s-pod-network.af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000279410), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-10", "pod":"calico-kube-controllers-8dc95788b-bgx5f", "timestamp":"2025-09-12 17:11:36.992183811 +0000 UTC"}, Hostname:"ip-172-31-22-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:36.993 [INFO][5039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:36.994 [INFO][5039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:36.994 [INFO][5039] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-10'
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.043 [INFO][5039] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" host="ip-172-31-22-10"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.059 [INFO][5039] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-10"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.082 [INFO][5039] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ip-172-31-22-10"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.090 [INFO][5039] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-22-10"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.095 [INFO][5039] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-22-10"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.095 [INFO][5039] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" host="ip-172-31-22-10"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.098 [INFO][5039] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.109 [INFO][5039] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" host="ip-172-31-22-10"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.124 [INFO][5039] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.195/26] block=192.168.16.192/26 handle="k8s-pod-network.af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" host="ip-172-31-22-10"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.124 [INFO][5039] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.195/26] handle="k8s-pod-network.af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" host="ip-172-31-22-10"
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.125 [INFO][5039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:37.198585 containerd[2023]: 2025-09-12 17:11:37.125 [INFO][5039] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.195/26] IPv6=[] ContainerID="af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" HandleID="k8s-pod-network.af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:37.201308 containerd[2023]: 2025-09-12 17:11:37.131 [INFO][4978] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" Namespace="calico-system" Pod="calico-kube-controllers-8dc95788b-bgx5f" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0", GenerateName:"calico-kube-controllers-8dc95788b-", Namespace:"calico-system", SelfLink:"", UID:"175bb5e7-5b87-45af-af71-37ac118306c2", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8dc95788b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"", Pod:"calico-kube-controllers-8dc95788b-bgx5f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali12385b46600", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:37.201308 containerd[2023]: 2025-09-12 17:11:37.131 [INFO][4978] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.195/32] ContainerID="af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" Namespace="calico-system" Pod="calico-kube-controllers-8dc95788b-bgx5f" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:37.201308 containerd[2023]: 2025-09-12 17:11:37.131 [INFO][4978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12385b46600 ContainerID="af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" Namespace="calico-system" Pod="calico-kube-controllers-8dc95788b-bgx5f" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:37.201308 containerd[2023]: 2025-09-12 17:11:37.138 [INFO][4978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" Namespace="calico-system" Pod="calico-kube-controllers-8dc95788b-bgx5f" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:37.201308 containerd[2023]: 2025-09-12 17:11:37.143 [INFO][4978] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" Namespace="calico-system" Pod="calico-kube-controllers-8dc95788b-bgx5f" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0", GenerateName:"calico-kube-controllers-8dc95788b-", Namespace:"calico-system", SelfLink:"", UID:"175bb5e7-5b87-45af-af71-37ac118306c2", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8dc95788b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754", Pod:"calico-kube-controllers-8dc95788b-bgx5f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali12385b46600", MAC:"8e:6a:19:b5:f8:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:37.201308 containerd[2023]: 2025-09-12 17:11:37.193 [INFO][4978] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754" Namespace="calico-system" Pod="calico-kube-controllers-8dc95788b-bgx5f" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:37.259462 containerd[2023]: time="2025-09-12T17:11:37.258519216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:11:37.260136 containerd[2023]: time="2025-09-12T17:11:37.259787184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:11:37.260667 containerd[2023]: time="2025-09-12T17:11:37.260087112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:11:37.262358 containerd[2023]: time="2025-09-12T17:11:37.261389172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:11:37.360169 systemd[1]: Started cri-containerd-af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754.scope - libcontainer container af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754.
Sep 12 17:11:37.386213 systemd-networkd[1937]: cali3033665294a: Link UP Sep 12 17:11:37.390179 systemd-networkd[1937]: cali3033665294a: Gained carrier Sep 12 17:11:37.392580 kernel: bpftool[5131]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:36.867 [INFO][4959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0 calico-apiserver-647977b4b6- calico-apiserver 345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0 923 0 2025-09-12 17:11:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:647977b4b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-10 calico-apiserver-647977b4b6-44ncj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3033665294a [] [] }} ContainerID="d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-44ncj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:36.869 [INFO][4959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-44ncj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.075 [INFO][5047] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" HandleID="k8s-pod-network.d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.076 [INFO][5047] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" HandleID="k8s-pod-network.d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-10", "pod":"calico-apiserver-647977b4b6-44ncj", "timestamp":"2025-09-12 17:11:37.075330779 +0000 UTC"}, Hostname:"ip-172-31-22-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.077 [INFO][5047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.124 [INFO][5047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.124 [INFO][5047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-10' Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.174 [INFO][5047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" host="ip-172-31-22-10" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.202 [INFO][5047] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-10" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.228 [INFO][5047] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.237 [INFO][5047] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.247 [INFO][5047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.247 [INFO][5047] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" host="ip-172-31-22-10" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.251 [INFO][5047] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6 Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.282 [INFO][5047] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" host="ip-172-31-22-10" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.322 [INFO][5047] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.196/26] block=192.168.16.192/26 handle="k8s-pod-network.d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" host="ip-172-31-22-10" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.323 [INFO][5047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.196/26] handle="k8s-pod-network.d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" host="ip-172-31-22-10" Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.323 [INFO][5047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:11:37.425714 containerd[2023]: 2025-09-12 17:11:37.323 [INFO][5047] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.196/26] IPv6=[] ContainerID="d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" HandleID="k8s-pod-network.d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0"
Sep 12 17:11:37.428316 containerd[2023]: 2025-09-12 17:11:37.338 [INFO][4959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-44ncj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0", GenerateName:"calico-apiserver-647977b4b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647977b4b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"", Pod:"calico-apiserver-647977b4b6-44ncj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3033665294a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:37.428316 containerd[2023]: 2025-09-12 17:11:37.339 [INFO][4959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.196/32] ContainerID="d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-44ncj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0"
Sep 12 17:11:37.428316 containerd[2023]: 2025-09-12 17:11:37.341 [INFO][4959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3033665294a ContainerID="d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-44ncj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0"
Sep 12 17:11:37.428316 containerd[2023]: 2025-09-12 17:11:37.395 [INFO][4959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-44ncj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0"
Sep 12 17:11:37.428316 containerd[2023]: 2025-09-12 17:11:37.397 [INFO][4959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-44ncj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0", GenerateName:"calico-apiserver-647977b4b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647977b4b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6", Pod:"calico-apiserver-647977b4b6-44ncj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3033665294a", MAC:"6e:6c:8f:f5:bf:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:37.428316 containerd[2023]: 2025-09-12 17:11:37.419 [INFO][4959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-44ncj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0"
Sep 12 17:11:37.465180 containerd[2023]: time="2025-09-12T17:11:37.465125017Z" level=info msg="StopPodSandbox for \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\""
Sep 12 17:11:37.467761 containerd[2023]: time="2025-09-12T17:11:37.467666377Z" level=info msg="StopPodSandbox for \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\""
Sep 12 17:11:37.570032 containerd[2023]: time="2025-09-12T17:11:37.566531161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:11:37.570032 containerd[2023]: time="2025-09-12T17:11:37.566741965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:11:37.570032 containerd[2023]: time="2025-09-12T17:11:37.566784193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:11:37.570032 containerd[2023]: time="2025-09-12T17:11:37.566972701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:11:37.663992 systemd[1]: Started cri-containerd-d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6.scope - libcontainer container d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6.
Sep 12 17:11:37.692571 containerd[2023]: time="2025-09-12T17:11:37.692240990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bb7568d94-w2sxb,Uid:161fab82-1a8a-451e-a83a-d466a9dbc262,Namespace:calico-system,Attempt:0,} returns sandbox id \"f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7\""
Sep 12 17:11:37.700107 containerd[2023]: time="2025-09-12T17:11:37.700018634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:37.815 [INFO][5168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:37.817 [INFO][5168] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" iface="eth0" netns="/var/run/netns/cni-16fbd1d0-db03-a3c6-5a15-e297a173619a"
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:37.817 [INFO][5168] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" iface="eth0" netns="/var/run/netns/cni-16fbd1d0-db03-a3c6-5a15-e297a173619a"
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:37.818 [INFO][5168] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" iface="eth0" netns="/var/run/netns/cni-16fbd1d0-db03-a3c6-5a15-e297a173619a"
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:37.818 [INFO][5168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:37.819 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:37.966 [INFO][5217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" HandleID="k8s-pod-network.70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0"
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:37.977 [INFO][5217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:37.977 [INFO][5217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:37.999 [WARNING][5217] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" HandleID="k8s-pod-network.70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0"
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:37.999 [INFO][5217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" HandleID="k8s-pod-network.70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0"
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:38.004 [INFO][5217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:38.017229 containerd[2023]: 2025-09-12 17:11:38.011 [INFO][5168] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:38.033402 systemd[1]: run-netns-cni\x2d16fbd1d0\x2ddb03\x2da3c6\x2d5a15\x2de297a173619a.mount: Deactivated successfully.
Sep 12 17:11:38.044150 containerd[2023]: time="2025-09-12T17:11:38.044065152Z" level=info msg="TearDown network for sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\" successfully"
Sep 12 17:11:38.044392 containerd[2023]: time="2025-09-12T17:11:38.044358132Z" level=info msg="StopPodSandbox for \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\" returns successfully"
Sep 12 17:11:38.047519 containerd[2023]: time="2025-09-12T17:11:38.046109868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nrhn,Uid:ae5d657e-9e31-4c9f-9e66-064135056e24,Namespace:calico-system,Attempt:1,}"
Sep 12 17:11:38.048239 systemd-networkd[1937]: cali588c5355d95: Gained IPv6LL
Sep 12 17:11:38.058022 containerd[2023]: time="2025-09-12T17:11:38.056304756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8dc95788b-bgx5f,Uid:175bb5e7-5b87-45af-af71-37ac118306c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754\""
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:37.842 [INFO][5171] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610"
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:37.843 [INFO][5171] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" iface="eth0" netns="/var/run/netns/cni-ba28cb7d-c5a1-0ea3-b0e1-b001bc704a06"
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:37.845 [INFO][5171] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" iface="eth0" netns="/var/run/netns/cni-ba28cb7d-c5a1-0ea3-b0e1-b001bc704a06"
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:37.846 [INFO][5171] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" iface="eth0" netns="/var/run/netns/cni-ba28cb7d-c5a1-0ea3-b0e1-b001bc704a06"
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:37.846 [INFO][5171] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610"
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:37.846 [INFO][5171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610"
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:37.995 [INFO][5222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" HandleID="k8s-pod-network.8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0"
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:37.996 [INFO][5222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:38.005 [INFO][5222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:38.043 [WARNING][5222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" HandleID="k8s-pod-network.8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0"
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:38.043 [INFO][5222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" HandleID="k8s-pod-network.8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0"
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:38.064 [INFO][5222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:38.083746 containerd[2023]: 2025-09-12 17:11:38.073 [INFO][5171] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610"
Sep 12 17:11:38.106123 systemd[1]: run-netns-cni\x2dba28cb7d\x2dc5a1\x2d0ea3\x2db0e1\x2db001bc704a06.mount: Deactivated successfully.
Sep 12 17:11:38.111397 kubelet[3226]: I0912 17:11:38.111307 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gdmfq" podStartSLOduration=48.111281772 podStartE2EDuration="48.111281772s" podCreationTimestamp="2025-09-12 17:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:38.020031444 +0000 UTC m=+52.943162064" watchObservedRunningTime="2025-09-12 17:11:38.111281772 +0000 UTC m=+53.034412404" Sep 12 17:11:38.133921 containerd[2023]: time="2025-09-12T17:11:38.132625008Z" level=info msg="TearDown network for sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\" successfully" Sep 12 17:11:38.133921 containerd[2023]: time="2025-09-12T17:11:38.132756288Z" level=info msg="StopPodSandbox for \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\" returns successfully" Sep 12 17:11:38.138754 containerd[2023]: time="2025-09-12T17:11:38.137392308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-vszm6,Uid:94f00bb7-79a9-4162-8a58-e6291343a943,Namespace:calico-system,Attempt:1,}" Sep 12 17:11:38.243190 containerd[2023]: time="2025-09-12T17:11:38.242679157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647977b4b6-44ncj,Uid:345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6\"" Sep 12 17:11:38.453348 systemd-networkd[1937]: calibfa914cccf7: Link UP Sep 12 17:11:38.457910 systemd-networkd[1937]: calibfa914cccf7: Gained carrier Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.285 [INFO][5239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0 csi-node-driver- calico-system ae5d657e-9e31-4c9f-9e66-064135056e24 944 0 2025-09-12 17:11:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-22-10 csi-node-driver-9nrhn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibfa914cccf7 [] [] }} ContainerID="656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" Namespace="calico-system" Pod="csi-node-driver-9nrhn" WorkloadEndpoint="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.291 [INFO][5239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" Namespace="calico-system" Pod="csi-node-driver-9nrhn" WorkloadEndpoint="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.365 [INFO][5269] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" HandleID="k8s-pod-network.656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.365 [INFO][5269] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" HandleID="k8s-pod-network.656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c2040), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-10", "pod":"csi-node-driver-9nrhn", "timestamp":"2025-09-12 17:11:38.365135077 +0000 UTC"}, Hostname:"ip-172-31-22-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.367 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.367 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.367 [INFO][5269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-10' Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.388 [INFO][5269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" host="ip-172-31-22-10" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.397 [INFO][5269] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-10" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.405 [INFO][5269] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.408 [INFO][5269] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.413 [INFO][5269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.413 [INFO][5269] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" host="ip-172-31-22-10" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.416 [INFO][5269] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592 Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.425 [INFO][5269] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" host="ip-172-31-22-10" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.436 [INFO][5269] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.197/26] block=192.168.16.192/26 handle="k8s-pod-network.656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" host="ip-172-31-22-10" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.436 [INFO][5269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.197/26] handle="k8s-pod-network.656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" host="ip-172-31-22-10" Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.438 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:11:38.516356 containerd[2023]: 2025-09-12 17:11:38.438 [INFO][5269] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.197/26] IPv6=[] ContainerID="656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" HandleID="k8s-pod-network.656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0" Sep 12 17:11:38.520137 containerd[2023]: 2025-09-12 17:11:38.445 [INFO][5239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" Namespace="calico-system" Pod="csi-node-driver-9nrhn" WorkloadEndpoint="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae5d657e-9e31-4c9f-9e66-064135056e24", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"", Pod:"csi-node-driver-9nrhn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfa914cccf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:38.520137 containerd[2023]: 2025-09-12 17:11:38.445 [INFO][5239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.197/32] ContainerID="656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" Namespace="calico-system" Pod="csi-node-driver-9nrhn" WorkloadEndpoint="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0" Sep 12 17:11:38.520137 containerd[2023]: 2025-09-12 17:11:38.445 [INFO][5239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfa914cccf7 ContainerID="656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" Namespace="calico-system" Pod="csi-node-driver-9nrhn" WorkloadEndpoint="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0" Sep 12 17:11:38.520137 containerd[2023]: 2025-09-12 17:11:38.457 [INFO][5239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" Namespace="calico-system" Pod="csi-node-driver-9nrhn" WorkloadEndpoint="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0" Sep 12 17:11:38.520137 containerd[2023]: 2025-09-12 17:11:38.462 [INFO][5239] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" 
Namespace="calico-system" Pod="csi-node-driver-9nrhn" WorkloadEndpoint="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae5d657e-9e31-4c9f-9e66-064135056e24", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592", Pod:"csi-node-driver-9nrhn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfa914cccf7", MAC:"c2:6c:f1:cc:0c:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:38.520137 containerd[2023]: 2025-09-12 17:11:38.508 [INFO][5239] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592" Namespace="calico-system" Pod="csi-node-driver-9nrhn" WorkloadEndpoint="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0" Sep 12 17:11:38.598222 containerd[2023]: time="2025-09-12T17:11:38.597203307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:38.598406 containerd[2023]: time="2025-09-12T17:11:38.597898455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:38.598406 containerd[2023]: time="2025-09-12T17:11:38.597942051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:38.598406 containerd[2023]: time="2025-09-12T17:11:38.598129503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:38.619194 systemd-networkd[1937]: cali680037e7221: Link UP Sep 12 17:11:38.627326 systemd-networkd[1937]: cali680037e7221: Gained carrier Sep 12 17:11:38.671074 systemd[1]: Started cri-containerd-656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592.scope - libcontainer container 656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592. 
Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.300 [INFO][5247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0 goldmane-54d579b49d- calico-system 94f00bb7-79a9-4162-8a58-e6291343a943 945 0 2025-09-12 17:11:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-22-10 goldmane-54d579b49d-vszm6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali680037e7221 [] [] }} ContainerID="27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" Namespace="calico-system" Pod="goldmane-54d579b49d-vszm6" WorkloadEndpoint="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.301 [INFO][5247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" Namespace="calico-system" Pod="goldmane-54d579b49d-vszm6" WorkloadEndpoint="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.391 [INFO][5274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" HandleID="k8s-pod-network.27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.392 [INFO][5274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" HandleID="k8s-pod-network.27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029b690), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-10", "pod":"goldmane-54d579b49d-vszm6", "timestamp":"2025-09-12 17:11:38.391820414 +0000 UTC"}, Hostname:"ip-172-31-22-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.392 [INFO][5274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.437 [INFO][5274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.437 [INFO][5274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-10' Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.487 [INFO][5274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" host="ip-172-31-22-10" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.510 [INFO][5274] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-10" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.529 [INFO][5274] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.534 [INFO][5274] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.545 [INFO][5274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.546 [INFO][5274] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" host="ip-172-31-22-10" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.559 [INFO][5274] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14 Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.570 [INFO][5274] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" host="ip-172-31-22-10" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.595 [INFO][5274] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.198/26] block=192.168.16.192/26 handle="k8s-pod-network.27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" host="ip-172-31-22-10" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.595 [INFO][5274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.198/26] handle="k8s-pod-network.27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" host="ip-172-31-22-10" Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.595 [INFO][5274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
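This second walk, for goldmane-54d579b49d-vszm6, lands on 192.168.16.198, the next slot after .197: both pods draw from the same affine /26, which spans the 64 addresses 192.168.16.192 through .255, and the host-wide lock serializes the two concurrent ADDs. The block arithmetic is easy to check with `net/netip`:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.16.192/26")
	fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 2^(32-26) = 64
	for _, s := range []string{"192.168.16.197", "192.168.16.198", "192.168.17.1"} {
		a := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", a, block, block.Contains(a))
	}
}
```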
Sep 12 17:11:38.690998 containerd[2023]: 2025-09-12 17:11:38.596 [INFO][5274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.198/26] IPv6=[] ContainerID="27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" HandleID="k8s-pod-network.27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:38.693126 containerd[2023]: 2025-09-12 17:11:38.604 [INFO][5247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" Namespace="calico-system" Pod="goldmane-54d579b49d-vszm6" WorkloadEndpoint="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"94f00bb7-79a9-4162-8a58-e6291343a943", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"", Pod:"goldmane-54d579b49d-vszm6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali680037e7221", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:38.693126 containerd[2023]: 2025-09-12 17:11:38.604 [INFO][5247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.198/32] ContainerID="27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" Namespace="calico-system" Pod="goldmane-54d579b49d-vszm6" WorkloadEndpoint="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:38.693126 containerd[2023]: 2025-09-12 17:11:38.604 [INFO][5247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali680037e7221 ContainerID="27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" Namespace="calico-system" Pod="goldmane-54d579b49d-vszm6" WorkloadEndpoint="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:38.693126 containerd[2023]: 2025-09-12 17:11:38.630 [INFO][5247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" Namespace="calico-system" Pod="goldmane-54d579b49d-vszm6" WorkloadEndpoint="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:38.693126 containerd[2023]: 2025-09-12 17:11:38.646 [INFO][5247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" Namespace="calico-system" Pod="goldmane-54d579b49d-vszm6" 
WorkloadEndpoint="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"94f00bb7-79a9-4162-8a58-e6291343a943", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14", Pod:"goldmane-54d579b49d-vszm6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali680037e7221", MAC:"3a:e4:f0:83:06:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:38.693126 containerd[2023]: 2025-09-12 17:11:38.681 [INFO][5247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14" Namespace="calico-system" Pod="goldmane-54d579b49d-vszm6" WorkloadEndpoint="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:38.744517 containerd[2023]: time="2025-09-12T17:11:38.744210387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:38.744727 containerd[2023]: time="2025-09-12T17:11:38.744359463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:38.744727 containerd[2023]: time="2025-09-12T17:11:38.744400527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:38.744727 containerd[2023]: time="2025-09-12T17:11:38.744634791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:38.821780 systemd[1]: Started cri-containerd-27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14.scope - libcontainer container 27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14. 
Sep 12 17:11:38.843726 containerd[2023]: time="2025-09-12T17:11:38.843296428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nrhn,Uid:ae5d657e-9e31-4c9f-9e66-064135056e24,Namespace:calico-system,Attempt:1,} returns sandbox id \"656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592\"" Sep 12 17:11:38.931671 systemd-networkd[1937]: vxlan.calico: Link UP Sep 12 17:11:38.931687 systemd-networkd[1937]: vxlan.calico: Gained carrier Sep 12 17:11:39.011057 systemd-networkd[1937]: cali12385b46600: Gained IPv6LL Sep 12 17:11:39.048162 (udev-worker)[4608]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:11:39.080401 containerd[2023]: time="2025-09-12T17:11:39.070596973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-vszm6,Uid:94f00bb7-79a9-4162-8a58-e6291343a943,Namespace:calico-system,Attempt:1,} returns sandbox id \"27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14\"" Sep 12 17:11:39.200233 systemd-networkd[1937]: cali3033665294a: Gained IPv6LL Sep 12 17:11:39.359720 containerd[2023]: time="2025-09-12T17:11:39.356556542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:39.359892 containerd[2023]: time="2025-09-12T17:11:39.358428254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 17:11:39.360136 containerd[2023]: time="2025-09-12T17:11:39.360096434Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:39.374272 containerd[2023]: time="2025-09-12T17:11:39.374201114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:39.380218 containerd[2023]: time="2025-09-12T17:11:39.379832318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.679437268s" Sep 12 17:11:39.380936 containerd[2023]: time="2025-09-12T17:11:39.380362130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 17:11:39.384073 containerd[2023]: time="2025-09-12T17:11:39.384009338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:11:39.393471 containerd[2023]: time="2025-09-12T17:11:39.392341755Z" level=info msg="CreateContainer within sandbox \"f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:11:39.432898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1271514839.mount: Deactivated successfully. 
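The whisker pull above finishes with `active requests=0, bytes read=4605606` after 1.679437268s, i.e. roughly 4.6 MB transferred at about 2.7 MB/s (the size "5974839" in the Pulled line is the repo-digest content size, which evidently differs from what this pull actually read). The arithmetic, using the duration exactly as containerd prints it:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 4605606.0 // "bytes read" from the log
	d, err := time.ParseDuration("1.679437268s")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.0f bytes in %s = %.2f MB/s\n", bytesRead, d, bytesRead/d.Seconds()/1e6)
}
```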
Sep 12 17:11:39.446005 containerd[2023]: time="2025-09-12T17:11:39.445634199Z" level=info msg="CreateContainer within sandbox \"f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5f508927253ead5dac334a59b63165b0bfe96331c6de614d33b5c8d00b586301\"" Sep 12 17:11:39.448483 containerd[2023]: time="2025-09-12T17:11:39.448394835Z" level=info msg="StartContainer for \"5f508927253ead5dac334a59b63165b0bfe96331c6de614d33b5c8d00b586301\"" Sep 12 17:11:39.465012 containerd[2023]: time="2025-09-12T17:11:39.463951299Z" level=info msg="StopPodSandbox for \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\"" Sep 12 17:11:39.554744 systemd[1]: Started cri-containerd-5f508927253ead5dac334a59b63165b0bfe96331c6de614d33b5c8d00b586301.scope - libcontainer container 5f508927253ead5dac334a59b63165b0bfe96331c6de614d33b5c8d00b586301. Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.649 [INFO][5437] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.650 [INFO][5437] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" iface="eth0" netns="/var/run/netns/cni-3a0506a7-7265-6586-12e8-d4aae00c91ff" Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.651 [INFO][5437] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" iface="eth0" netns="/var/run/netns/cni-3a0506a7-7265-6586-12e8-d4aae00c91ff" Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.653 [INFO][5437] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" iface="eth0" netns="/var/run/netns/cni-3a0506a7-7265-6586-12e8-d4aae00c91ff" Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.653 [INFO][5437] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.653 [INFO][5437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.702 [INFO][5460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" HandleID="k8s-pod-network.749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.702 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.702 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.719 [WARNING][5460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" HandleID="k8s-pod-network.749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.719 [INFO][5460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" HandleID="k8s-pod-network.749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.725 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:39.745516 containerd[2023]: 2025-09-12 17:11:39.739 [INFO][5437] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Sep 12 17:11:39.747146 containerd[2023]: time="2025-09-12T17:11:39.746845012Z" level=info msg="TearDown network for sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\" successfully" Sep 12 17:11:39.747146 containerd[2023]: time="2025-09-12T17:11:39.746943928Z" level=info msg="StopPodSandbox for \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\" returns successfully" Sep 12 17:11:39.749363 containerd[2023]: time="2025-09-12T17:11:39.749294116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647977b4b6-hgchj,Uid:5ba4c09f-7a52-43c3-843a-207b43648510,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:11:39.762974 containerd[2023]: time="2025-09-12T17:11:39.762894220Z" level=info msg="StartContainer for \"5f508927253ead5dac334a59b63165b0bfe96331c6de614d33b5c8d00b586301\" returns successfully" Sep 12 17:11:40.031632 systemd-networkd[1937]: cali680037e7221: Gained IPv6LL Sep 12 17:11:40.046276 systemd[1]: run-netns-cni\x2d3a0506a7\x2d7265\x2d6586\x2d12e8\x2dd4aae00c91ff.mount: Deactivated successfully. 
Sep 12 17:11:40.113704 systemd-networkd[1937]: cali4ce520d62da: Link UP Sep 12 17:11:40.116819 systemd-networkd[1937]: cali4ce520d62da: Gained carrier Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:39.914 [INFO][5476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0 calico-apiserver-647977b4b6- calico-apiserver 5ba4c09f-7a52-43c3-843a-207b43648510 971 0 2025-09-12 17:11:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:647977b4b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-10 calico-apiserver-647977b4b6-hgchj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4ce520d62da [] [] }} ContainerID="f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-hgchj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:39.914 [INFO][5476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-hgchj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:39.974 [INFO][5495] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" HandleID="k8s-pod-network.f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:39.975 [INFO][5495] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" HandleID="k8s-pod-network.f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-10", "pod":"calico-apiserver-647977b4b6-hgchj", "timestamp":"2025-09-12 17:11:39.974910401 +0000 UTC"}, Hostname:"ip-172-31-22-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:39.975 [INFO][5495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:39.975 [INFO][5495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:39.976 [INFO][5495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-10' Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:39.995 [INFO][5495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" host="ip-172-31-22-10" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:40.007 [INFO][5495] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-10" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:40.015 [INFO][5495] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:40.029 [INFO][5495] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:40.068 [INFO][5495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:40.069 [INFO][5495] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" host="ip-172-31-22-10" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:40.073 [INFO][5495] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:40.083 [INFO][5495] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" host="ip-172-31-22-10" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:40.100 [INFO][5495] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.199/26] block=192.168.16.192/26 handle="k8s-pod-network.f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" host="ip-172-31-22-10" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:40.100 [INFO][5495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.199/26] handle="k8s-pod-network.f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" host="ip-172-31-22-10" Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:40.100 [INFO][5495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:11:40.151820 containerd[2023]: 2025-09-12 17:11:40.101 [INFO][5495] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.199/26] IPv6=[] ContainerID="f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" HandleID="k8s-pod-network.f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" Sep 12 17:11:40.153092 containerd[2023]: 2025-09-12 17:11:40.104 [INFO][5476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-hgchj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0", GenerateName:"calico-apiserver-647977b4b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ba4c09f-7a52-43c3-843a-207b43648510", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647977b4b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"", Pod:"calico-apiserver-647977b4b6-hgchj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ce520d62da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:40.153092 containerd[2023]: 2025-09-12 17:11:40.105 [INFO][5476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.199/32] ContainerID="f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-hgchj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" Sep 12 17:11:40.153092 containerd[2023]: 2025-09-12 17:11:40.105 [INFO][5476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ce520d62da ContainerID="f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-hgchj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" Sep 12 17:11:40.153092 containerd[2023]: 2025-09-12 17:11:40.117 [INFO][5476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-hgchj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" Sep 12 17:11:40.153092 containerd[2023]: 2025-09-12 17:11:40.119 [INFO][5476] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-hgchj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0", GenerateName:"calico-apiserver-647977b4b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ba4c09f-7a52-43c3-843a-207b43648510", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647977b4b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e", Pod:"calico-apiserver-647977b4b6-hgchj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ce520d62da", MAC:"ae:a2:5a:e8:c1:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:40.153092 containerd[2023]: 2025-09-12 17:11:40.142 [INFO][5476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e" Namespace="calico-apiserver" Pod="calico-apiserver-647977b4b6-hgchj" WorkloadEndpoint="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0" Sep 12 17:11:40.214056 containerd[2023]: time="2025-09-12T17:11:40.213686895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:40.214056 containerd[2023]: time="2025-09-12T17:11:40.213862539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:40.214056 containerd[2023]: time="2025-09-12T17:11:40.213937731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:40.214610 containerd[2023]: time="2025-09-12T17:11:40.214492323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:40.272127 systemd[1]: run-containerd-runc-k8s.io-f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e-runc.5QusIW.mount: Deactivated successfully. Sep 12 17:11:40.284794 systemd[1]: Started cri-containerd-f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e.scope - libcontainer container f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e. 
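Every endpoint in this log gets a host-side veth named `cali` plus 11 hex characters (calibfa914cccf7, cali680037e7221, cali4ce520d62da, ...), which keeps the name inside the kernel's 15-byte interface-name limit while staying stable and unique per workload. The sketch below produces names of that shape by truncating a hash of an endpoint key; it is only illustrative and not necessarily the exact derivation Calico uses.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName derives a "cali"-prefixed interface name from an endpoint key:
// 4 prefix chars + 11 hex chars = 15, the maximum Linux interface name length.
func vethName(endpointKey string) string {
	sum := sha1.Sum([]byte(endpointKey))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-apiserver/calico-apiserver-647977b4b6-hgchj"))
	fmt.Println(vethName("kube-system/coredns-668d6bf9bc-7dhd5"))
}
```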
Sep 12 17:11:40.288431 systemd-networkd[1937]: calibfa914cccf7: Gained IPv6LL Sep 12 17:11:40.395163 containerd[2023]: time="2025-09-12T17:11:40.394999612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647977b4b6-hgchj,Uid:5ba4c09f-7a52-43c3-843a-207b43648510,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e\"" Sep 12 17:11:40.461231 containerd[2023]: time="2025-09-12T17:11:40.461151136Z" level=info msg="StopPodSandbox for \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\"" Sep 12 17:11:40.546516 systemd-networkd[1937]: vxlan.calico: Gained IPv6LL Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.548 [INFO][5599] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.551 [INFO][5599] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" iface="eth0" netns="/var/run/netns/cni-a8b34256-8896-a53f-ed7b-5481f2827ffd" Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.552 [INFO][5599] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" iface="eth0" netns="/var/run/netns/cni-a8b34256-8896-a53f-ed7b-5481f2827ffd" Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.553 [INFO][5599] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" iface="eth0" netns="/var/run/netns/cni-a8b34256-8896-a53f-ed7b-5481f2827ffd" Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.553 [INFO][5599] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.553 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.604 [INFO][5606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" HandleID="k8s-pod-network.f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.604 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.604 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.616 [WARNING][5606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" HandleID="k8s-pod-network.f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.617 [INFO][5606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" HandleID="k8s-pod-network.f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.619 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:40.626116 containerd[2023]: 2025-09-12 17:11:40.622 [INFO][5599] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Sep 12 17:11:40.629895 containerd[2023]: time="2025-09-12T17:11:40.626490089Z" level=info msg="TearDown network for sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\" successfully" Sep 12 17:11:40.629895 containerd[2023]: time="2025-09-12T17:11:40.626529905Z" level=info msg="StopPodSandbox for \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\" returns successfully" Sep 12 17:11:40.629895 containerd[2023]: time="2025-09-12T17:11:40.628736177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7dhd5,Uid:6229e9e2-94b6-4d89-8229-2b5bbe16089b,Namespace:kube-system,Attempt:1,}" Sep 12 17:11:40.633001 systemd[1]: run-netns-cni\x2da8b34256\x2d8896\x2da53f\x2ded7b\x2d5481f2827ffd.mount: Deactivated successfully. Sep 12 17:11:40.866938 systemd-networkd[1937]: calid22f7f4d5c4: Link UP Sep 12 17:11:40.872681 systemd-networkd[1937]: calid22f7f4d5c4: Gained carrier Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.711 [INFO][5613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0 coredns-668d6bf9bc- kube-system 6229e9e2-94b6-4d89-8229-2b5bbe16089b 981 0 2025-09-12 17:10:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-10 coredns-668d6bf9bc-7dhd5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid22f7f4d5c4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-7dhd5" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-" Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.711 [INFO][5613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-7dhd5" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.761 [INFO][5624] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" HandleID="k8s-pod-network.9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" 
Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.761 [INFO][5624] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" HandleID="k8s-pod-network.9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-10", "pod":"coredns-668d6bf9bc-7dhd5", "timestamp":"2025-09-12 17:11:40.760987661 +0000 UTC"}, Hostname:"ip-172-31-22-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.761 [INFO][5624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.761 [INFO][5624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.761 [INFO][5624] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-10' Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.780 [INFO][5624] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" host="ip-172-31-22-10" Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.791 [INFO][5624] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-10" Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.801 [INFO][5624] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.804 [INFO][5624] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.814 [INFO][5624] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-22-10" Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.814 [INFO][5624] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" host="ip-172-31-22-10" Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.818 [INFO][5624] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.829 [INFO][5624] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" host="ip-172-31-22-10" Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.852 [INFO][5624] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.16.200/26] block=192.168.16.192/26 handle="k8s-pod-network.9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" host="ip-172-31-22-10" Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.852 [INFO][5624] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.200/26] handle="k8s-pod-network.9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" host="ip-172-31-22-10" Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.853 [INFO][5624] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:40.914656 containerd[2023]: 2025-09-12 17:11:40.853 [INFO][5624] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.200/26] IPv6=[] ContainerID="9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" HandleID="k8s-pod-network.9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" Sep 12 17:11:40.918392 containerd[2023]: 2025-09-12 17:11:40.856 [INFO][5613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-7dhd5" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6229e9e2-94b6-4d89-8229-2b5bbe16089b", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"", Pod:"coredns-668d6bf9bc-7dhd5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid22f7f4d5c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:40.918392 containerd[2023]: 2025-09-12 17:11:40.857 [INFO][5613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.200/32] ContainerID="9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-7dhd5" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" Sep 12 17:11:40.918392 containerd[2023]: 2025-09-12 17:11:40.857 [INFO][5613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid22f7f4d5c4 ContainerID="9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-7dhd5" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" Sep 12 17:11:40.918392 containerd[2023]: 2025-09-12 17:11:40.876 [INFO][5613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-7dhd5" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" Sep 12 17:11:40.918392 containerd[2023]: 2025-09-12 17:11:40.884 [INFO][5613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-7dhd5" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6229e9e2-94b6-4d89-8229-2b5bbe16089b", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee", Pod:"coredns-668d6bf9bc-7dhd5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid22f7f4d5c4", MAC:"da:04:0e:8b:73:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:40.918392 containerd[2023]: 2025-09-12 17:11:40.909 [INFO][5613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-7dhd5" WorkloadEndpoint="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" Sep 12 17:11:40.967232 containerd[2023]: time="2025-09-12T17:11:40.966420702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:40.967232 containerd[2023]: time="2025-09-12T17:11:40.966910314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:40.969104 containerd[2023]: time="2025-09-12T17:11:40.966950622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:40.969104 containerd[2023]: time="2025-09-12T17:11:40.968326410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:41.029814 systemd[1]: Started cri-containerd-9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee.scope - libcontainer container 9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee. Sep 12 17:11:41.161438 containerd[2023]: time="2025-09-12T17:11:41.161115543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7dhd5,Uid:6229e9e2-94b6-4d89-8229-2b5bbe16089b,Namespace:kube-system,Attempt:1,} returns sandbox id \"9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee\"" Sep 12 17:11:41.166715 containerd[2023]: time="2025-09-12T17:11:41.166270299Z" level=info msg="CreateContainer within sandbox \"9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:11:41.196379 containerd[2023]: time="2025-09-12T17:11:41.196304619Z" level=info msg="CreateContainer within sandbox \"9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"227b7053758ab9343ca68cdf89bbc6b425e0d46e8764d50fe89296ea4fb908a4\"" Sep 12 17:11:41.197792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount7501531.mount: Deactivated successfully. Sep 12 17:11:41.201114 containerd[2023]: time="2025-09-12T17:11:41.200319520Z" level=info msg="StartContainer for \"227b7053758ab9343ca68cdf89bbc6b425e0d46e8764d50fe89296ea4fb908a4\"" Sep 12 17:11:41.266395 systemd[1]: Started cri-containerd-227b7053758ab9343ca68cdf89bbc6b425e0d46e8764d50fe89296ea4fb908a4.scope - libcontainer container 227b7053758ab9343ca68cdf89bbc6b425e0d46e8764d50fe89296ea4fb908a4. Sep 12 17:11:41.353723 containerd[2023]: time="2025-09-12T17:11:41.353496796Z" level=info msg="StartContainer for \"227b7053758ab9343ca68cdf89bbc6b425e0d46e8764d50fe89296ea4fb908a4\" returns successfully" Sep 12 17:11:41.825573 systemd-networkd[1937]: cali4ce520d62da: Gained IPv6LL Sep 12 17:11:42.193715 kubelet[3226]: I0912 17:11:42.191665 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7dhd5" podStartSLOduration=52.191640736 podStartE2EDuration="52.191640736s" podCreationTimestamp="2025-09-12 17:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:42.139473328 +0000 UTC m=+57.062603960" watchObservedRunningTime="2025-09-12 17:11:42.191640736 +0000 UTC m=+57.114771344" Sep 12 17:11:42.336704 systemd-networkd[1937]: calid22f7f4d5c4: Gained IPv6LL Sep 12 17:11:43.615867 containerd[2023]: time="2025-09-12T17:11:43.615785072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:43.619576 containerd[2023]: time="2025-09-12T17:11:43.619501604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 17:11:43.621176 containerd[2023]: time="2025-09-12T17:11:43.621119180Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:43.630030 containerd[2023]: time="2025-09-12T17:11:43.629940752Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:43.648547 containerd[2023]: time="2025-09-12T17:11:43.648291224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.26421207s" Sep 12 17:11:43.648547 containerd[2023]: time="2025-09-12T17:11:43.648401696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 17:11:43.659347 containerd[2023]: time="2025-09-12T17:11:43.659236088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:11:43.698307 containerd[2023]: time="2025-09-12T17:11:43.697901120Z" level=info msg="CreateContainer within sandbox \"af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:11:43.728500 containerd[2023]: time="2025-09-12T17:11:43.728382680Z" level=info msg="CreateContainer within sandbox \"af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0d519fed61c1aa562d458e6b9e5cdc8c83bcabb45ed9ef5085de63e87c4ce9a4\"" Sep 12 17:11:43.732138 containerd[2023]: time="2025-09-12T17:11:43.732072272Z" level=info msg="StartContainer for \"0d519fed61c1aa562d458e6b9e5cdc8c83bcabb45ed9ef5085de63e87c4ce9a4\"" Sep 12 17:11:43.822783 systemd[1]: Started cri-containerd-0d519fed61c1aa562d458e6b9e5cdc8c83bcabb45ed9ef5085de63e87c4ce9a4.scope - libcontainer container 0d519fed61c1aa562d458e6b9e5cdc8c83bcabb45ed9ef5085de63e87c4ce9a4. 
Sep 12 17:11:43.905808 containerd[2023]: time="2025-09-12T17:11:43.905561889Z" level=info msg="StartContainer for \"0d519fed61c1aa562d458e6b9e5cdc8c83bcabb45ed9ef5085de63e87c4ce9a4\" returns successfully"
Sep 12 17:11:44.311199 kubelet[3226]: I0912 17:11:44.311039 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8dc95788b-bgx5f" podStartSLOduration=26.717750723 podStartE2EDuration="32.309948367s" podCreationTimestamp="2025-09-12 17:11:12 +0000 UTC" firstStartedPulling="2025-09-12 17:11:38.06341304 +0000 UTC m=+52.986543660" lastFinishedPulling="2025-09-12 17:11:43.655610696 +0000 UTC m=+58.578741304" observedRunningTime="2025-09-12 17:11:44.170912514 +0000 UTC m=+59.094043134" watchObservedRunningTime="2025-09-12 17:11:44.309948367 +0000 UTC m=+59.233078975"
Sep 12 17:11:44.588975 ntpd[1992]: Listen normally on 7 vxlan.calico 192.168.16.192:123
Sep 12 17:11:44.589142 ntpd[1992]: Listen normally on 8 calie16eead0b86 [fe80::ecee:eeff:feee:eeee%4]:123
Sep 12 17:11:44.589241 ntpd[1992]: Listen normally on 9 cali588c5355d95 [fe80::ecee:eeff:feee:eeee%5]:123
Sep 12 17:11:44.589310 ntpd[1992]: Listen normally on 10 cali12385b46600 [fe80::ecee:eeff:feee:eeee%6]:123
Sep 12 17:11:44.589378 ntpd[1992]: Listen normally on 11 cali3033665294a [fe80::ecee:eeff:feee:eeee%7]:123
Sep 12 17:11:44.589471 ntpd[1992]: Listen normally on 12 calibfa914cccf7 [fe80::ecee:eeff:feee:eeee%8]:123
Sep 12 17:11:44.589549 ntpd[1992]: Listen normally on 13 cali680037e7221 [fe80::ecee:eeff:feee:eeee%9]:123
Sep 12 17:11:44.589618 ntpd[1992]: Listen normally on 14 vxlan.calico [fe80::6400:d2ff:feea:ced5%10]:123
Sep 12 17:11:44.589686 ntpd[1992]: Listen normally on 15 cali4ce520d62da [fe80::ecee:eeff:feee:eeee%13]:123
Sep 12 17:11:44.589753 ntpd[1992]: Listen normally on 16 calid22f7f4d5c4 [fe80::ecee:eeff:feee:eeee%14]:123
Sep 12 17:11:45.413541 containerd[2023]: time="2025-09-12T17:11:45.413480168Z" level=info msg="StopPodSandbox for \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\""
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.537 [WARNING][5817] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0", GenerateName:"calico-apiserver-647977b4b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ba4c09f-7a52-43c3-843a-207b43648510", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647977b4b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e", Pod:"calico-apiserver-647977b4b6-hgchj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ce520d62da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.538 [INFO][5817] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770"
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.538 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" iface="eth0" netns=""
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.538 [INFO][5817] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770"
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.538 [INFO][5817] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770"
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.689 [INFO][5827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" HandleID="k8s-pod-network.749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0"
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.690 [INFO][5827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.691 [INFO][5827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.726 [WARNING][5827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" HandleID="k8s-pod-network.749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0"
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.726 [INFO][5827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" HandleID="k8s-pod-network.749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0"
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.730 [INFO][5827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:45.741340 containerd[2023]: 2025-09-12 17:11:45.733 [INFO][5817] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770"
Sep 12 17:11:45.741340 containerd[2023]: time="2025-09-12T17:11:45.740203510Z" level=info msg="TearDown network for sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\" successfully"
Sep 12 17:11:45.741340 containerd[2023]: time="2025-09-12T17:11:45.740240614Z" level=info msg="StopPodSandbox for \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\" returns successfully"
Sep 12 17:11:45.741340 containerd[2023]: time="2025-09-12T17:11:45.741117898Z" level=info msg="RemovePodSandbox for \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\""
Sep 12 17:11:45.741340 containerd[2023]: time="2025-09-12T17:11:45.741173110Z" level=info msg="Forcibly stopping sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\""
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:45.927 [WARNING][5841] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0", GenerateName:"calico-apiserver-647977b4b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ba4c09f-7a52-43c3-843a-207b43648510", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647977b4b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e", Pod:"calico-apiserver-647977b4b6-hgchj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ce520d62da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:45.928 [INFO][5841] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770"
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:45.928 [INFO][5841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" iface="eth0" netns=""
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:45.928 [INFO][5841] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770"
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:45.928 [INFO][5841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770"
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:46.038 [INFO][5853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" HandleID="k8s-pod-network.749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0"
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:46.040 [INFO][5853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:46.041 [INFO][5853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:46.126 [WARNING][5853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" HandleID="k8s-pod-network.749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0"
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:46.127 [INFO][5853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" HandleID="k8s-pod-network.749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--hgchj-eth0"
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:46.146 [INFO][5853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:46.169611 containerd[2023]: 2025-09-12 17:11:46.160 [INFO][5841] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770"
Sep 12 17:11:46.172545 containerd[2023]: time="2025-09-12T17:11:46.169660148Z" level=info msg="TearDown network for sandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\" successfully"
Sep 12 17:11:46.181673 containerd[2023]: time="2025-09-12T17:11:46.181482140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:11:46.182473 containerd[2023]: time="2025-09-12T17:11:46.182099348Z" level=info msg="RemovePodSandbox \"749df293ce991b441ffe1d5418ab957d69f131883e03c3c4f152c0c190114770\" returns successfully"
Sep 12 17:11:46.185043 containerd[2023]: time="2025-09-12T17:11:46.184967468Z" level=info msg="StopPodSandbox for \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\""
Sep 12 17:11:46.447156 systemd[1]: Started sshd@7-172.31.22.10:22-147.75.109.163:56638.service - OpenSSH per-connection server daemon (147.75.109.163:56638).
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.462 [WARNING][5867] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"586bc779-7033-4130-9b97-e99099aaf59c", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5", Pod:"coredns-668d6bf9bc-gdmfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie16eead0b86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.466 [INFO][5867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a"
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.466 [INFO][5867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" iface="eth0" netns=""
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.471 [INFO][5867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a"
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.471 [INFO][5867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a"
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.603 [INFO][5881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" HandleID="k8s-pod-network.41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0"
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.607 [INFO][5881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.607 [INFO][5881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.633 [WARNING][5881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" HandleID="k8s-pod-network.41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0"
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.635 [INFO][5881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" HandleID="k8s-pod-network.41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0"
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.641 [INFO][5881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:46.671772 containerd[2023]: 2025-09-12 17:11:46.653 [INFO][5867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a"
Sep 12 17:11:46.671772 containerd[2023]: time="2025-09-12T17:11:46.672125339Z" level=info msg="TearDown network for sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\" successfully"
Sep 12 17:11:46.671772 containerd[2023]: time="2025-09-12T17:11:46.672768515Z" level=info msg="StopPodSandbox for \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\" returns successfully"
Sep 12 17:11:46.682146 containerd[2023]: time="2025-09-12T17:11:46.681943319Z" level=info msg="RemovePodSandbox for \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\""
Sep 12 17:11:46.682699 containerd[2023]: time="2025-09-12T17:11:46.682583783Z" level=info msg="Forcibly stopping sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\""
Sep 12 17:11:46.707549 sshd[5876]: Accepted publickey for core from 147.75.109.163 port 56638 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:11:46.718639 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:46.747829 systemd-logind[1999]: New session 8 of user core.
Sep 12 17:11:46.753813 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.835 [WARNING][5900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"586bc779-7033-4130-9b97-e99099aaf59c", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"0e1536f411a83fe60cc875b4a82a32c671bc206854b7034f0697a90ba7310ce5", Pod:"coredns-668d6bf9bc-gdmfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie16eead0b86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.836 [INFO][5900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a"
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.836 [INFO][5900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" iface="eth0" netns=""
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.836 [INFO][5900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a"
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.836 [INFO][5900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a"
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.924 [INFO][5908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" HandleID="k8s-pod-network.41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0"
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.925 [INFO][5908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.925 [INFO][5908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.947 [WARNING][5908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" HandleID="k8s-pod-network.41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0"
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.947 [INFO][5908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" HandleID="k8s-pod-network.41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--gdmfq-eth0"
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.953 [INFO][5908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:46.970856 containerd[2023]: 2025-09-12 17:11:46.957 [INFO][5900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a"
Sep 12 17:11:46.972281 containerd[2023]: time="2025-09-12T17:11:46.970823964Z" level=info msg="TearDown network for sandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\" successfully"
Sep 12 17:11:46.986761 containerd[2023]: time="2025-09-12T17:11:46.986510400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:11:46.986761 containerd[2023]: time="2025-09-12T17:11:46.986608860Z" level=info msg="RemovePodSandbox \"41d49b9e0491eba0654494bd80d996563a39f69cf722889ae2a7bd83ee46899a\" returns successfully"
Sep 12 17:11:46.988348 containerd[2023]: time="2025-09-12T17:11:46.987853260Z" level=info msg="StopPodSandbox for \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\""
Sep 12 17:11:47.166726 sshd[5876]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:47.176891 systemd[1]: sshd@7-172.31.22.10:22-147.75.109.163:56638.service: Deactivated successfully.
Sep 12 17:11:47.185230 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:11:47.191961 systemd-logind[1999]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:11:47.195461 systemd-logind[1999]: Removed session 8.
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.175 [WARNING][5929] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6229e9e2-94b6-4d89-8229-2b5bbe16089b", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee", Pod:"coredns-668d6bf9bc-7dhd5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid22f7f4d5c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.176 [INFO][5929] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc"
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.176 [INFO][5929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" iface="eth0" netns=""
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.176 [INFO][5929] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc"
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.176 [INFO][5929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc"
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.254 [INFO][5939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" HandleID="k8s-pod-network.f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0"
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.255 [INFO][5939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.255 [INFO][5939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.272 [WARNING][5939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" HandleID="k8s-pod-network.f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0"
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.273 [INFO][5939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" HandleID="k8s-pod-network.f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0"
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.277 [INFO][5939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:47.290413 containerd[2023]: 2025-09-12 17:11:47.281 [INFO][5929] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc"
Sep 12 17:11:47.291781 containerd[2023]: time="2025-09-12T17:11:47.290487526Z" level=info msg="TearDown network for sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\" successfully"
Sep 12 17:11:47.291781 containerd[2023]: time="2025-09-12T17:11:47.290572822Z" level=info msg="StopPodSandbox for \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\" returns successfully"
Sep 12 17:11:47.295608 containerd[2023]: time="2025-09-12T17:11:47.293941138Z" level=info msg="RemovePodSandbox for \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\""
Sep 12 17:11:47.295608 containerd[2023]: time="2025-09-12T17:11:47.294021178Z" level=info msg="Forcibly stopping sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\""
Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.481 [WARNING][5954] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6229e9e2-94b6-4d89-8229-2b5bbe16089b", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"9d1b6d37030b48eca3edc2587eb15bd64e2e8c38c2cb60905f9b7f27a9a570ee", Pod:"coredns-668d6bf9bc-7dhd5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid22f7f4d5c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.482 [INFO][5954] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc"
Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.482 [INFO][5954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" iface="eth0" netns=""
Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.483 [INFO][5954] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc"
Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.483 [INFO][5954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc"
Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.540 [INFO][5961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" HandleID="k8s-pod-network.f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0"
Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.541 [INFO][5961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.541 [INFO][5961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.565 [WARNING][5961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" HandleID="k8s-pod-network.f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.565 [INFO][5961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" HandleID="k8s-pod-network.f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Workload="ip--172--31--22--10-k8s-coredns--668d6bf9bc--7dhd5-eth0" Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.571 [INFO][5961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:47.580794 containerd[2023]: 2025-09-12 17:11:47.574 [INFO][5954] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc" Sep 12 17:11:47.580794 containerd[2023]: time="2025-09-12T17:11:47.580750799Z" level=info msg="TearDown network for sandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\" successfully" Sep 12 17:11:47.592517 containerd[2023]: time="2025-09-12T17:11:47.592387967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:11:47.592517 containerd[2023]: time="2025-09-12T17:11:47.592506251Z" level=info msg="RemovePodSandbox \"f309086060c2a7820b61ae5a61b7daab08c6ae60eb7edd81ac7665cb13e0a1fc\" returns successfully" Sep 12 17:11:47.593274 containerd[2023]: time="2025-09-12T17:11:47.593229191Z" level=info msg="StopPodSandbox for \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\"" Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.712 [WARNING][5976] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0", GenerateName:"calico-apiserver-647977b4b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647977b4b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6", Pod:"calico-apiserver-647977b4b6-44ncj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3033665294a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.712 [INFO][5976] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.712 [INFO][5976] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" iface="eth0" netns="" Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.713 [INFO][5976] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.713 [INFO][5976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.770 [INFO][5983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" HandleID="k8s-pod-network.d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.770 [INFO][5983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.770 [INFO][5983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.806 [WARNING][5983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" HandleID="k8s-pod-network.d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.806 [INFO][5983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" HandleID="k8s-pod-network.d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.820 [INFO][5983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:47.840596 containerd[2023]: 2025-09-12 17:11:47.830 [INFO][5976] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:47.840596 containerd[2023]: time="2025-09-12T17:11:47.839344392Z" level=info msg="TearDown network for sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\" successfully" Sep 12 17:11:47.840596 containerd[2023]: time="2025-09-12T17:11:47.839382060Z" level=info msg="StopPodSandbox for \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\" returns successfully" Sep 12 17:11:47.844649 containerd[2023]: time="2025-09-12T17:11:47.843904249Z" level=info msg="RemovePodSandbox for \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\"" Sep 12 17:11:47.844649 containerd[2023]: time="2025-09-12T17:11:47.843959677Z" level=info msg="Forcibly stopping sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\"" Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.014 [WARNING][5997] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0", GenerateName:"calico-apiserver-647977b4b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"345c245e-fb46-4bd0-8ae8-b1e67cf9cdc0", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647977b4b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6", Pod:"calico-apiserver-647977b4b6-44ncj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3033665294a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.014 [INFO][5997] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.014 [INFO][5997] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" iface="eth0" netns="" Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.014 [INFO][5997] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.014 [INFO][5997] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.091 [INFO][6004] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" HandleID="k8s-pod-network.d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.091 [INFO][6004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.091 [INFO][6004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.114 [WARNING][6004] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" HandleID="k8s-pod-network.d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.114 [INFO][6004] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" HandleID="k8s-pod-network.d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Workload="ip--172--31--22--10-k8s-calico--apiserver--647977b4b6--44ncj-eth0" Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.117 [INFO][6004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:48.128219 containerd[2023]: 2025-09-12 17:11:48.121 [INFO][5997] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65" Sep 12 17:11:48.129919 containerd[2023]: time="2025-09-12T17:11:48.129296470Z" level=info msg="TearDown network for sandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\" successfully" Sep 12 17:11:48.139476 containerd[2023]: time="2025-09-12T17:11:48.139006942Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:11:48.139476 containerd[2023]: time="2025-09-12T17:11:48.139141858Z" level=info msg="RemovePodSandbox \"d0edb487570edb24ebf70b9d44bdbd52d1a32bc0e36c1d72389f745d81264c65\" returns successfully" Sep 12 17:11:48.140528 containerd[2023]: time="2025-09-12T17:11:48.140088466Z" level=info msg="StopPodSandbox for \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\"" Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.252 [WARNING][6018] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"94f00bb7-79a9-4162-8a58-e6291343a943", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14", Pod:"goldmane-54d579b49d-vszm6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali680037e7221", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.253 [INFO][6018] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.253 [INFO][6018] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" iface="eth0" netns="" Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.253 [INFO][6018] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.253 [INFO][6018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.329 [INFO][6026] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" HandleID="k8s-pod-network.8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.330 [INFO][6026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.331 [INFO][6026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.353 [WARNING][6026] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" HandleID="k8s-pod-network.8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.353 [INFO][6026] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" HandleID="k8s-pod-network.8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.356 [INFO][6026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:48.362434 containerd[2023]: 2025-09-12 17:11:48.359 [INFO][6018] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Sep 12 17:11:48.363848 containerd[2023]: time="2025-09-12T17:11:48.362568131Z" level=info msg="TearDown network for sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\" successfully" Sep 12 17:11:48.363848 containerd[2023]: time="2025-09-12T17:11:48.362618807Z" level=info msg="StopPodSandbox for \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\" returns successfully" Sep 12 17:11:48.363848 containerd[2023]: time="2025-09-12T17:11:48.363742307Z" level=info msg="RemovePodSandbox for \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\"" Sep 12 17:11:48.363848 containerd[2023]: time="2025-09-12T17:11:48.363810359Z" level=info msg="Forcibly stopping sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\"" Sep 12 17:11:48.367665 containerd[2023]: time="2025-09-12T17:11:48.367438667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:48.372501 containerd[2023]: time="2025-09-12T17:11:48.372266795Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:48.373198 containerd[2023]: time="2025-09-12T17:11:48.373141127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 17:11:48.378843 containerd[2023]: time="2025-09-12T17:11:48.378573587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:48.381348 containerd[2023]: time="2025-09-12T17:11:48.380412683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 4.721108399s" Sep 12 17:11:48.381348 containerd[2023]: time="2025-09-12T17:11:48.380523371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:11:48.396026 containerd[2023]: time="2025-09-12T17:11:48.395892167Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:11:48.396594 containerd[2023]: time="2025-09-12T17:11:48.396406367Z" level=info msg="CreateContainer within sandbox \"d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:11:48.447366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3284325941.mount: Deactivated successfully. Sep 12 17:11:48.450869 containerd[2023]: time="2025-09-12T17:11:48.450330924Z" level=info msg="CreateContainer within sandbox \"d3082b5e0d5debd09ca851fbe900f6f51c01c15dfc678099bfffbd5346ecc1f6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c94a811406539b43c5b682afca3a5d009e60a40d12da7828cb3de02b684bbe6a\"" Sep 12 17:11:48.453317 containerd[2023]: time="2025-09-12T17:11:48.452285988Z" level=info msg="StartContainer for \"c94a811406539b43c5b682afca3a5d009e60a40d12da7828cb3de02b684bbe6a\"" Sep 12 17:11:48.608669 systemd[1]: Started cri-containerd-c94a811406539b43c5b682afca3a5d009e60a40d12da7828cb3de02b684bbe6a.scope - libcontainer container c94a811406539b43c5b682afca3a5d009e60a40d12da7828cb3de02b684bbe6a. Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.519 [WARNING][6045] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"94f00bb7-79a9-4162-8a58-e6291343a943", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14", Pod:"goldmane-54d579b49d-vszm6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali680037e7221", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.520 [INFO][6045] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.520 [INFO][6045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" iface="eth0" netns="" Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.520 [INFO][6045] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.521 [INFO][6045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.636 [INFO][6058] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" HandleID="k8s-pod-network.8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.637 [INFO][6058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.637 [INFO][6058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.650 [WARNING][6058] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" HandleID="k8s-pod-network.8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.650 [INFO][6058] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" HandleID="k8s-pod-network.8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Workload="ip--172--31--22--10-k8s-goldmane--54d579b49d--vszm6-eth0" Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.654 [INFO][6058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:48.663191 containerd[2023]: 2025-09-12 17:11:48.658 [INFO][6045] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610" Sep 12 17:11:48.663191 containerd[2023]: time="2025-09-12T17:11:48.662883709Z" level=info msg="TearDown network for sandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\" successfully" Sep 12 17:11:48.668706 containerd[2023]: time="2025-09-12T17:11:48.668623297Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:11:48.668869 containerd[2023]: time="2025-09-12T17:11:48.668728369Z" level=info msg="RemovePodSandbox \"8958dbbacafe53d65e713a91edfead67ead26a9ef3a7b8157d03996d19ae2610\" returns successfully"
Sep 12 17:11:48.670164 containerd[2023]: time="2025-09-12T17:11:48.670076773Z" level=info msg="StopPodSandbox for \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\""
Sep 12 17:11:48.733964 containerd[2023]: time="2025-09-12T17:11:48.733617625Z" level=info msg="StartContainer for \"c94a811406539b43c5b682afca3a5d009e60a40d12da7828cb3de02b684bbe6a\" returns successfully"
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.763 [WARNING][6090] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae5d657e-9e31-4c9f-9e66-064135056e24", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592", Pod:"csi-node-driver-9nrhn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfa914cccf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.764 [INFO][6090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.764 [INFO][6090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" iface="eth0" netns=""
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.764 [INFO][6090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.764 [INFO][6090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.825 [INFO][6110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" HandleID="k8s-pod-network.70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0"
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.825 [INFO][6110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.825 [INFO][6110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.838 [WARNING][6110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" HandleID="k8s-pod-network.70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0"
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.838 [INFO][6110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" HandleID="k8s-pod-network.70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0"
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.841 [INFO][6110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:48.849355 containerd[2023]: 2025-09-12 17:11:48.845 [INFO][6090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:48.849355 containerd[2023]: time="2025-09-12T17:11:48.849172669Z" level=info msg="TearDown network for sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\" successfully"
Sep 12 17:11:48.849355 containerd[2023]: time="2025-09-12T17:11:48.849218905Z" level=info msg="StopPodSandbox for \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\" returns successfully"
Sep 12 17:11:48.852229 containerd[2023]: time="2025-09-12T17:11:48.849972554Z" level=info msg="RemovePodSandbox for \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\""
Sep 12 17:11:48.852229 containerd[2023]: time="2025-09-12T17:11:48.850021214Z" level=info msg="Forcibly stopping sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\""
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:48.935 [WARNING][6125] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae5d657e-9e31-4c9f-9e66-064135056e24", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592", Pod:"csi-node-driver-9nrhn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfa914cccf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:48.935 [INFO][6125] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:48.935 [INFO][6125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" iface="eth0" netns=""
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:48.935 [INFO][6125] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:48.935 [INFO][6125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:48.986 [INFO][6133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" HandleID="k8s-pod-network.70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0"
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:48.986 [INFO][6133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:48.986 [INFO][6133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:49.001 [WARNING][6133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" HandleID="k8s-pod-network.70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0"
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:49.002 [INFO][6133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" HandleID="k8s-pod-network.70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6" Workload="ip--172--31--22--10-k8s-csi--node--driver--9nrhn-eth0"
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:49.006 [INFO][6133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:49.016157 containerd[2023]: 2025-09-12 17:11:49.009 [INFO][6125] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6"
Sep 12 17:11:49.017017 containerd[2023]: time="2025-09-12T17:11:49.016209250Z" level=info msg="TearDown network for sandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\" successfully"
Sep 12 17:11:49.022702 containerd[2023]: time="2025-09-12T17:11:49.022604158Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:11:49.022854 containerd[2023]: time="2025-09-12T17:11:49.022775254Z" level=info msg="RemovePodSandbox \"70c373821fee48f63553dae70660c4e12025f0b09019f8002d9cbcaa596ee0c6\" returns successfully"
Sep 12 17:11:49.023490 containerd[2023]: time="2025-09-12T17:11:49.023410462Z" level=info msg="StopPodSandbox for \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\""
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.093 [WARNING][6152] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" WorkloadEndpoint="ip--172--31--22--10-k8s-whisker--57b5cb98ff--kxfbh-eth0"
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.094 [INFO][6152] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a"
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.094 [INFO][6152] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" iface="eth0" netns=""
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.094 [INFO][6152] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a"
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.095 [INFO][6152] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a"
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.138 [INFO][6159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" HandleID="k8s-pod-network.0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Workload="ip--172--31--22--10-k8s-whisker--57b5cb98ff--kxfbh-eth0"
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.138 [INFO][6159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.138 [INFO][6159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.152 [WARNING][6159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" HandleID="k8s-pod-network.0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Workload="ip--172--31--22--10-k8s-whisker--57b5cb98ff--kxfbh-eth0"
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.153 [INFO][6159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" HandleID="k8s-pod-network.0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Workload="ip--172--31--22--10-k8s-whisker--57b5cb98ff--kxfbh-eth0"
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.155 [INFO][6159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:49.160662 containerd[2023]: 2025-09-12 17:11:49.157 [INFO][6152] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a"
Sep 12 17:11:49.162105 containerd[2023]: time="2025-09-12T17:11:49.160727363Z" level=info msg="TearDown network for sandbox \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\" successfully"
Sep 12 17:11:49.162105 containerd[2023]: time="2025-09-12T17:11:49.160764527Z" level=info msg="StopPodSandbox for \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\" returns successfully"
Sep 12 17:11:49.162105 containerd[2023]: time="2025-09-12T17:11:49.161806691Z" level=info msg="RemovePodSandbox for \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\""
Sep 12 17:11:49.162105 containerd[2023]: time="2025-09-12T17:11:49.162039323Z" level=info msg="Forcibly stopping sandbox \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\""
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.295 [WARNING][6173] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" WorkloadEndpoint="ip--172--31--22--10-k8s-whisker--57b5cb98ff--kxfbh-eth0"
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.295 [INFO][6173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a"
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.295 [INFO][6173] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" iface="eth0" netns=""
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.295 [INFO][6173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a"
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.295 [INFO][6173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a"
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.336 [INFO][6182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" HandleID="k8s-pod-network.0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Workload="ip--172--31--22--10-k8s-whisker--57b5cb98ff--kxfbh-eth0"
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.336 [INFO][6182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.336 [INFO][6182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.353 [WARNING][6182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" HandleID="k8s-pod-network.0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Workload="ip--172--31--22--10-k8s-whisker--57b5cb98ff--kxfbh-eth0"
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.353 [INFO][6182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" HandleID="k8s-pod-network.0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a" Workload="ip--172--31--22--10-k8s-whisker--57b5cb98ff--kxfbh-eth0"
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.357 [INFO][6182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:49.363931 containerd[2023]: 2025-09-12 17:11:49.360 [INFO][6173] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a"
Sep 12 17:11:49.363931 containerd[2023]: time="2025-09-12T17:11:49.363894060Z" level=info msg="TearDown network for sandbox \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\" successfully"
Sep 12 17:11:49.369949 containerd[2023]: time="2025-09-12T17:11:49.369877812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:11:49.370095 containerd[2023]: time="2025-09-12T17:11:49.370018848Z" level=info msg="RemovePodSandbox \"0e1e6fd3482cec6c2881c849d8587da1ad9a3ff2e947012c898d3dff5cc3dc7a\" returns successfully"
Sep 12 17:11:49.371115 containerd[2023]: time="2025-09-12T17:11:49.371063928Z" level=info msg="StopPodSandbox for \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\""
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.445 [WARNING][6196] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0", GenerateName:"calico-kube-controllers-8dc95788b-", Namespace:"calico-system", SelfLink:"", UID:"175bb5e7-5b87-45af-af71-37ac118306c2", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8dc95788b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754", Pod:"calico-kube-controllers-8dc95788b-bgx5f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali12385b46600", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.445 [INFO][6196] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98"
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.445 [INFO][6196] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" iface="eth0" netns=""
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.445 [INFO][6196] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98"
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.446 [INFO][6196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98"
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.488 [INFO][6204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" HandleID="k8s-pod-network.74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.489 [INFO][6204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.489 [INFO][6204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.506 [WARNING][6204] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" HandleID="k8s-pod-network.74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.506 [INFO][6204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" HandleID="k8s-pod-network.74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.509 [INFO][6204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:49.515197 containerd[2023]: 2025-09-12 17:11:49.512 [INFO][6196] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98"
Sep 12 17:11:49.516721 containerd[2023]: time="2025-09-12T17:11:49.515272333Z" level=info msg="TearDown network for sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\" successfully"
Sep 12 17:11:49.516721 containerd[2023]: time="2025-09-12T17:11:49.515335969Z" level=info msg="StopPodSandbox for \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\" returns successfully"
Sep 12 17:11:49.516721 containerd[2023]: time="2025-09-12T17:11:49.516091969Z" level=info msg="RemovePodSandbox for \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\""
Sep 12 17:11:49.516721 containerd[2023]: time="2025-09-12T17:11:49.516138001Z" level=info msg="Forcibly stopping sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\""
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.596 [WARNING][6218] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0", GenerateName:"calico-kube-controllers-8dc95788b-", Namespace:"calico-system", SelfLink:"", UID:"175bb5e7-5b87-45af-af71-37ac118306c2", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8dc95788b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-10", ContainerID:"af37236bc207e395dae53cb7b9e5e39cd3cbeee16b6431bbcc5d0f6f4281a754", Pod:"calico-kube-controllers-8dc95788b-bgx5f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali12385b46600", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.597 [INFO][6218] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98"
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.597 [INFO][6218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" iface="eth0" netns=""
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.597 [INFO][6218] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98"
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.597 [INFO][6218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98"
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.645 [INFO][6225] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" HandleID="k8s-pod-network.74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.645 [INFO][6225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.646 [INFO][6225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.661 [WARNING][6225] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" HandleID="k8s-pod-network.74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.661 [INFO][6225] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" HandleID="k8s-pod-network.74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98" Workload="ip--172--31--22--10-k8s-calico--kube--controllers--8dc95788b--bgx5f-eth0"
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.664 [INFO][6225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:49.671801 containerd[2023]: 2025-09-12 17:11:49.668 [INFO][6218] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98"
Sep 12 17:11:49.672713 containerd[2023]: time="2025-09-12T17:11:49.671814698Z" level=info msg="TearDown network for sandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\" successfully"
Sep 12 17:11:49.679040 containerd[2023]: time="2025-09-12T17:11:49.678926570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:11:49.679210 containerd[2023]: time="2025-09-12T17:11:49.679076342Z" level=info msg="RemovePodSandbox \"74c8849ce32c1404ade1f65ab10054cdf81facf4345de487e1f5a188ebb26e98\" returns successfully"
Sep 12 17:11:50.121500 containerd[2023]: time="2025-09-12T17:11:50.120126204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:50.123386 containerd[2023]: time="2025-09-12T17:11:50.123058056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 12 17:11:50.125849 containerd[2023]: time="2025-09-12T17:11:50.125242608Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:50.131056 containerd[2023]: time="2025-09-12T17:11:50.131000880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:50.133179 containerd[2023]: time="2025-09-12T17:11:50.133112028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.737053793s"
Sep 12 17:11:50.133323 containerd[2023]: time="2025-09-12T17:11:50.133218132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 12 17:11:50.137266 containerd[2023]: time="2025-09-12T17:11:50.136742364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 12 17:11:50.140983 containerd[2023]: time="2025-09-12T17:11:50.140928528Z" level=info msg="CreateContainer within sandbox \"656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 17:11:50.182940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987956648.mount: Deactivated successfully.
Sep 12 17:11:50.186857 containerd[2023]: time="2025-09-12T17:11:50.186802236Z" level=info msg="CreateContainer within sandbox \"656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0521a59f2a7204ed9aa5f5585c91ced779fb26641fcfc75accbc592927808f01\""
Sep 12 17:11:50.192104 containerd[2023]: time="2025-09-12T17:11:50.191827956Z" level=info msg="StartContainer for \"0521a59f2a7204ed9aa5f5585c91ced779fb26641fcfc75accbc592927808f01\""
Sep 12 17:11:50.214293 kubelet[3226]: I0912 17:11:50.213722 3226 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:11:50.291753 systemd[1]: Started cri-containerd-0521a59f2a7204ed9aa5f5585c91ced779fb26641fcfc75accbc592927808f01.scope - libcontainer container 0521a59f2a7204ed9aa5f5585c91ced779fb26641fcfc75accbc592927808f01.
Sep 12 17:11:50.399725 containerd[2023]: time="2025-09-12T17:11:50.399369229Z" level=info msg="StartContainer for \"0521a59f2a7204ed9aa5f5585c91ced779fb26641fcfc75accbc592927808f01\" returns successfully"
Sep 12 17:11:52.216021 systemd[1]: Started sshd@8-172.31.22.10:22-147.75.109.163:53504.service - OpenSSH per-connection server daemon (147.75.109.163:53504).
Sep 12 17:11:52.407340 sshd[6281]: Accepted publickey for core from 147.75.109.163 port 53504 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:11:52.410930 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:52.421989 systemd-logind[1999]: New session 9 of user core.
Sep 12 17:11:52.429933 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:11:52.759685 sshd[6281]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:52.767750 systemd[1]: sshd@8-172.31.22.10:22-147.75.109.163:53504.service: Deactivated successfully.
Sep 12 17:11:52.771833 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:11:52.774730 systemd-logind[1999]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:11:52.781229 systemd-logind[1999]: Removed session 9.
Sep 12 17:11:54.237335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360606063.mount: Deactivated successfully.
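Each container above moves through the same four entries: CreateContainer returns an id, StartContainer is issued, systemd starts the matching cri-containerd-<id>.scope, and StartContainer returns. A rough sketch that pairs the first and last of these by container id to get a create-to-running latency; it assumes one journal entry per line in journal.txt (hypothetical filename) and that all entries fall within a single day:

    import re
    from collections import defaultdict

    TS   = re.compile(r'^Sep 12 (\d{2}:\d{2}:\d{2}\.\d+)')
    CID  = re.compile(r'(?:StartContainer for|returns container id) \\?"([0-9a-f]{64})\\?"')
    DONE = re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully')

    def hms_to_s(t):
        # "17:11:48.453317" -> seconds since midnight
        h, m, s = t.split(":")
        return int(h) * 3600 + int(m) * 60 + float(s)

    events = defaultdict(dict)        # container id -> {"create": t, "done": t}
    for line in open("journal.txt", encoding="utf-8"):
        ts = TS.match(line)
        if not ts:
            continue
        t = hms_to_s(ts.group(1))
        if (m := DONE.search(line)):
            events[m.group(1)]["done"] = t
        elif (m := CID.search(line)):
            # setdefault keeps the earliest sighting (the CreateContainer return)
            events[m.group(1)].setdefault("create", t)

    for cid, ev in events.items():
        if "create" in ev and "done" in ev:
            print(f"{cid[:12]}  create->running {ev['done'] - ev['create']:6.3f}s")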
Sep 12 17:11:54.944561 containerd[2023]: time="2025-09-12T17:11:54.944496956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:54.946236 containerd[2023]: time="2025-09-12T17:11:54.946182572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 12 17:11:54.949502 containerd[2023]: time="2025-09-12T17:11:54.948069056Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:54.952701 containerd[2023]: time="2025-09-12T17:11:54.952615664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:54.955509 containerd[2023]: time="2025-09-12T17:11:54.954305348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 4.81750212s"
Sep 12 17:11:54.955509 containerd[2023]: time="2025-09-12T17:11:54.954368120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 12 17:11:54.959153 containerd[2023]: time="2025-09-12T17:11:54.959076536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 12 17:11:54.962366 containerd[2023]: time="2025-09-12T17:11:54.961750652Z" level=info msg="CreateContainer within sandbox \"27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 17:11:54.986754 containerd[2023]: time="2025-09-12T17:11:54.986683124Z" level=info msg="CreateContainer within sandbox \"27973bf3af949106fe3c35089fd8e15ee6ea23ac3db13bea1a5c8fb456bafc14\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"54b29a7d8977ccbf2a5f980dad65683243f62d0da074e03144d2ac64d1f31b83\""
Sep 12 17:11:54.990074 containerd[2023]: time="2025-09-12T17:11:54.989373080Z" level=info msg="StartContainer for \"54b29a7d8977ccbf2a5f980dad65683243f62d0da074e03144d2ac64d1f31b83\""
Sep 12 17:11:55.070289 systemd[1]: run-containerd-runc-k8s.io-54b29a7d8977ccbf2a5f980dad65683243f62d0da074e03144d2ac64d1f31b83-runc.zKpoQD.mount: Deactivated successfully.
Sep 12 17:11:55.086794 systemd[1]: Started cri-containerd-54b29a7d8977ccbf2a5f980dad65683243f62d0da074e03144d2ac64d1f31b83.scope - libcontainer container 54b29a7d8977ccbf2a5f980dad65683243f62d0da074e03144d2ac64d1f31b83.
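The "Pulled image" completion lines carry the image reference, the reported size, and the wall-clock pull duration (4.721108399s for apiserver, 1.737053793s for csi, 4.81750212s for goldmane above), so a table of pull times falls out of one regular expression. A sketch under the same one-entry-per-line journal.txt assumption; note containerd prints ms rather than s for sub-second pulls:

    import re

    # containerd completion lines observed above, e.g.:
    #   Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" ... size \"9596730\" in 1.737053793s
    PULLED = re.compile(
        r'Pulled image \\?"(?P<ref>[^"\\]+)\\?".*?'
        r'size \\?"(?P<size>\d+)\\?" in (?P<secs>[\d.]+)(?P<unit>m?s)'
    )

    def pull_times(path="journal.txt"):
        for line in open(path, encoding="utf-8"):
            m = PULLED.search(line)
            if m:
                secs = float(m["secs"]) / (1000 if m["unit"] == "ms" else 1)
                yield m["ref"], int(m["size"]), secs

    # Slowest pulls first.
    for ref, size, secs in sorted(pull_times(), key=lambda r: -r[2]):
        print(f"{secs:9.3f}s  {size/1e6:8.1f} MB  {ref}")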
Sep 12 17:11:55.168691 containerd[2023]: time="2025-09-12T17:11:55.168629573Z" level=info msg="StartContainer for \"54b29a7d8977ccbf2a5f980dad65683243f62d0da074e03144d2ac64d1f31b83\" returns successfully"
Sep 12 17:11:55.284477 kubelet[3226]: I0912 17:11:55.281887 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-647977b4b6-44ncj" podStartSLOduration=41.141713215 podStartE2EDuration="51.281863013s" podCreationTimestamp="2025-09-12 17:11:04 +0000 UTC" firstStartedPulling="2025-09-12 17:11:38.248921533 +0000 UTC m=+53.172052153" lastFinishedPulling="2025-09-12 17:11:48.389071343 +0000 UTC m=+63.312201951" observedRunningTime="2025-09-12 17:11:49.230477723 +0000 UTC m=+64.153608355" watchObservedRunningTime="2025-09-12 17:11:55.281863013 +0000 UTC m=+70.204993633"
Sep 12 17:11:55.295062 kubelet[3226]: I0912 17:11:55.289339 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-vszm6" podStartSLOduration=28.407352698 podStartE2EDuration="44.289313489s" podCreationTimestamp="2025-09-12 17:11:11 +0000 UTC" firstStartedPulling="2025-09-12 17:11:39.074816761 +0000 UTC m=+53.997947381" lastFinishedPulling="2025-09-12 17:11:54.956777564 +0000 UTC m=+69.879908172" observedRunningTime="2025-09-12 17:11:55.281006057 +0000 UTC m=+70.204136701" watchObservedRunningTime="2025-09-12 17:11:55.289313489 +0000 UTC m=+70.212444217"
Sep 12 17:11:57.573506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537892066.mount: Deactivated successfully.
Sep 12 17:11:57.599133 containerd[2023]: time="2025-09-12T17:11:57.599064633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:57.601554 containerd[2023]: time="2025-09-12T17:11:57.601497429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700"
Sep 12 17:11:57.604582 containerd[2023]: time="2025-09-12T17:11:57.602490525Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:57.608173 containerd[2023]: time="2025-09-12T17:11:57.608117853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:57.609848 containerd[2023]: time="2025-09-12T17:11:57.609785673Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.650639177s"
Sep 12 17:11:57.609973 containerd[2023]: time="2025-09-12T17:11:57.609847389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\""
Sep 12 17:11:57.612906 containerd[2023]: time="2025-09-12T17:11:57.612812709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 17:11:57.619017 containerd[2023]: time="2025-09-12T17:11:57.618946377Z" level=info msg="CreateContainer within sandbox \"f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 17:11:57.644191 containerd[2023]: time="2025-09-12T17:11:57.643832109Z" level=info msg="CreateContainer within sandbox \"f31cc31d0f8459f1941578f4553819eebb900e623e4d320c405f28995f279cb7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0792de4443335242e1fc7f48e7b73b0f0244d35825f13698ba63136e21d01fe6\""
Sep 12 17:11:57.646279 containerd[2023]: time="2025-09-12T17:11:57.645400533Z" level=info msg="StartContainer for \"0792de4443335242e1fc7f48e7b73b0f0244d35825f13698ba63136e21d01fe6\""
Sep 12 17:11:57.705946 systemd[1]: Started cri-containerd-0792de4443335242e1fc7f48e7b73b0f0244d35825f13698ba63136e21d01fe6.scope - libcontainer container 0792de4443335242e1fc7f48e7b73b0f0244d35825f13698ba63136e21d01fe6.
Sep 12 17:11:57.778501 containerd[2023]: time="2025-09-12T17:11:57.778409290Z" level=info msg="StartContainer for \"0792de4443335242e1fc7f48e7b73b0f0244d35825f13698ba63136e21d01fe6\" returns successfully"
Sep 12 17:11:57.806046 systemd[1]: Started sshd@9-172.31.22.10:22-147.75.109.163:53510.service - OpenSSH per-connection server daemon (147.75.109.163:53510).
Sep 12 17:11:57.998931 sshd[6456]: Accepted publickey for core from 147.75.109.163 port 53510 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:11:58.003180 sshd[6456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:58.013522 systemd-logind[1999]: New session 10 of user core.
Sep 12 17:11:58.022792 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 17:11:58.070455 containerd[2023]: time="2025-09-12T17:11:58.070371487Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:58.071466 containerd[2023]: time="2025-09-12T17:11:58.071391007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 12 17:11:58.076786 containerd[2023]: time="2025-09-12T17:11:58.076320043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 463.441226ms"
Sep 12 17:11:58.076786 containerd[2023]: time="2025-09-12T17:11:58.076419763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 12 17:11:58.082488 containerd[2023]: time="2025-09-12T17:11:58.081193195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 17:11:58.085163 containerd[2023]: time="2025-09-12T17:11:58.084755743Z" level=info msg="CreateContainer within sandbox \"f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 17:11:58.105936 containerd[2023]: time="2025-09-12T17:11:58.105855391Z" level=info msg="CreateContainer within sandbox \"f9fa3f673ef987e8fa7fbf05fe886637e273d9348eab96c5b6a9f74f48f1a23e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7be1e8883a3150caf00627326feb0e79bd8bb8ddc5ad55b17d187573b555e4e9\""
Sep 12 17:11:58.108593 containerd[2023]: time="2025-09-12T17:11:58.108543295Z" level=info msg="StartContainer for \"7be1e8883a3150caf00627326feb0e79bd8bb8ddc5ad55b17d187573b555e4e9\""
Sep 12 17:11:58.176013 systemd[1]: Started cri-containerd-7be1e8883a3150caf00627326feb0e79bd8bb8ddc5ad55b17d187573b555e4e9.scope - libcontainer container 7be1e8883a3150caf00627326feb0e79bd8bb8ddc5ad55b17d187573b555e4e9.
Sep 12 17:11:58.282466 containerd[2023]: time="2025-09-12T17:11:58.282209660Z" level=info msg="StartContainer for \"7be1e8883a3150caf00627326feb0e79bd8bb8ddc5ad55b17d187573b555e4e9\" returns successfully"
Sep 12 17:11:58.388762 sshd[6456]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:58.396636 systemd-logind[1999]: Session 10 logged out. Waiting for processes to exit.
Sep 12 17:11:58.398195 systemd[1]: sshd@9-172.31.22.10:22-147.75.109.163:53510.service: Deactivated successfully.
Sep 12 17:11:58.403810 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 17:11:58.410790 systemd-logind[1999]: Removed session 10.
Sep 12 17:11:58.439634 systemd[1]: Started sshd@10-172.31.22.10:22-147.75.109.163:53514.service - OpenSSH per-connection server daemon (147.75.109.163:53514).
Sep 12 17:11:58.635986 sshd[6511]: Accepted publickey for core from 147.75.109.163 port 53514 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:11:58.639651 sshd[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:58.648793 systemd-logind[1999]: New session 11 of user core.
Sep 12 17:11:58.656713 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 17:11:59.098584 sshd[6511]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:59.106798 systemd-logind[1999]: Session 11 logged out. Waiting for processes to exit.
Sep 12 17:11:59.107181 systemd[1]: sshd@10-172.31.22.10:22-147.75.109.163:53514.service: Deactivated successfully.
Sep 12 17:11:59.118588 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 17:11:59.174539 systemd-logind[1999]: Removed session 11.
Sep 12 17:11:59.182675 systemd[1]: Started sshd@11-172.31.22.10:22-147.75.109.163:53524.service - OpenSSH per-connection server daemon (147.75.109.163:53524).
Sep 12 17:11:59.319207 kubelet[3226]: I0912 17:11:59.319040 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5bb7568d94-w2sxb" podStartSLOduration=5.405606495 podStartE2EDuration="25.319012978s" podCreationTimestamp="2025-09-12 17:11:34 +0000 UTC" firstStartedPulling="2025-09-12 17:11:37.699187286 +0000 UTC m=+52.622317906" lastFinishedPulling="2025-09-12 17:11:57.612593769 +0000 UTC m=+72.535724389" observedRunningTime="2025-09-12 17:11:58.323673813 +0000 UTC m=+73.246804457" watchObservedRunningTime="2025-09-12 17:11:59.319012978 +0000 UTC m=+74.242143598"
Sep 12 17:11:59.400538 sshd[6526]: Accepted publickey for core from 147.75.109.163 port 53524 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:11:59.402288 sshd[6526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:59.416205 systemd-logind[1999]: New session 12 of user core.
Sep 12 17:11:59.424793 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 17:11:59.843660 sshd[6526]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:59.855742 systemd[1]: sshd@11-172.31.22.10:22-147.75.109.163:53524.service: Deactivated successfully.
Sep 12 17:11:59.864525 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 17:11:59.876706 systemd-logind[1999]: Session 12 logged out. Waiting for processes to exit.
Sep 12 17:11:59.883973 systemd-logind[1999]: Removed session 12.
Sep 12 17:12:00.180356 containerd[2023]: time="2025-09-12T17:12:00.178862458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:00.182228 containerd[2023]: time="2025-09-12T17:12:00.182163970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 12 17:12:00.184509 containerd[2023]: time="2025-09-12T17:12:00.184388314Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:00.190500 containerd[2023]: time="2025-09-12T17:12:00.190416298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:00.192681 containerd[2023]: time="2025-09-12T17:12:00.192632122Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.111335295s"
Sep 12 17:12:00.193319 containerd[2023]: time="2025-09-12T17:12:00.192843694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 12 17:12:00.199638 containerd[2023]: time="2025-09-12T17:12:00.199586338Z" level=info msg="CreateContainer within sandbox \"656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 17:12:00.237888 containerd[2023]: time="2025-09-12T17:12:00.237834298Z" level=info msg="CreateContainer within sandbox \"656953cd1f7d1f21ddb0f660bda2b29ad8e3778f4fe25bed8cc8eb268f3f9592\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"217676cf9494e37c02566236f9d1b99b096e3fe62419a0da609ba03565bf00e5\""
Sep 12 17:12:00.247522 containerd[2023]: time="2025-09-12T17:12:00.240881914Z" level=info msg="StartContainer for \"217676cf9494e37c02566236f9d1b99b096e3fe62419a0da609ba03565bf00e5\""
Sep 12 17:12:00.251952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174729997.mount: Deactivated successfully.
Sep 12 17:12:00.328391 kubelet[3226]: I0912 17:12:00.328339 3226 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:12:00.369921 systemd[1]: Started cri-containerd-217676cf9494e37c02566236f9d1b99b096e3fe62419a0da609ba03565bf00e5.scope - libcontainer container 217676cf9494e37c02566236f9d1b99b096e3fe62419a0da609ba03565bf00e5.
Sep 12 17:12:00.446115 containerd[2023]: time="2025-09-12T17:12:00.445793867Z" level=info msg="StartContainer for \"217676cf9494e37c02566236f9d1b99b096e3fe62419a0da609ba03565bf00e5\" returns successfully"
Sep 12 17:12:00.691283 kubelet[3226]: I0912 17:12:00.691224 3226 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:12:00.691283 kubelet[3226]: I0912 17:12:00.691287 3226 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:12:01.359134 kubelet[3226]: I0912 17:12:01.359042 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-647977b4b6-hgchj" podStartSLOduration=39.676862001 podStartE2EDuration="57.35901888s" podCreationTimestamp="2025-09-12 17:11:04 +0000 UTC" firstStartedPulling="2025-09-12 17:11:40.397525264 +0000 UTC m=+55.320655872" lastFinishedPulling="2025-09-12 17:11:58.079682119 +0000 UTC m=+73.002812751" observedRunningTime="2025-09-12 17:11:59.321728782 +0000 UTC m=+74.244859438" watchObservedRunningTime="2025-09-12 17:12:01.35901888 +0000 UTC m=+76.282149500"
Sep 12 17:12:04.025859 kubelet[3226]: I0912 17:12:04.025732 3226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9nrhn" podStartSLOduration=30.676404583 podStartE2EDuration="52.025703833s" podCreationTimestamp="2025-09-12 17:11:12 +0000 UTC" firstStartedPulling="2025-09-12 17:11:38.846063748 +0000 UTC m=+53.769194380" lastFinishedPulling="2025-09-12 17:12:00.19536301 +0000 UTC m=+75.118493630" observedRunningTime="2025-09-12 17:12:01.3629385 +0000 UTC m=+76.286069132" watchObservedRunningTime="2025-09-12 17:12:04.025703833 +0000 UTC m=+78.948834453"
Sep 12 17:12:04.642862 kubelet[3226]: I0912 17:12:04.642333 3226 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:12:04.885883 systemd[1]: Started sshd@12-172.31.22.10:22-147.75.109.163:45790.service - OpenSSH per-connection server daemon (147.75.109.163:45790).
Sep 12 17:12:05.071116 sshd[6614]: Accepted publickey for core from 147.75.109.163 port 45790 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:05.077751 sshd[6614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:05.095341 systemd-logind[1999]: New session 13 of user core.
Sep 12 17:12:05.103786 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:12:05.398849 sshd[6614]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:05.406125 systemd[1]: sshd@12-172.31.22.10:22-147.75.109.163:45790.service: Deactivated successfully.
Sep 12 17:12:05.413677 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:12:05.421150 systemd-logind[1999]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:12:05.423399 systemd-logind[1999]: Removed session 13.
Sep 12 17:12:10.144509 kubelet[3226]: I0912 17:12:10.144296 3226 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:12:10.446226 systemd[1]: Started sshd@13-172.31.22.10:22-147.75.109.163:58398.service - OpenSSH per-connection server daemon (147.75.109.163:58398).
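The kubelet entries above use klog's header layout: a severity letter fused to an MMDD date, the wall time with microseconds, the emitting PID, file:line, a closing bracket, then the message. A small decomposition sketch using the CSI registration line above as sample input:

    import re

    # klog header, e.g.: I0912 17:12:00.691224 3226 csi_plugin.go:100] message...
    KLOG = re.compile(
        r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +'
        r'(?P<pid>\d+) (?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)'
    )

    sample = ('I0912 17:12:00.691224 3226 csi_plugin.go:100] kubernetes.io/csi: '
              'Trying to validate a new CSI Driver with name: csi.tigera.io')

    m = KLOG.match(sample)
    print(m.group("sev"), m.group("file"), m.group("line"))   # I csi_plugin.go 100
    print(m.group("msg"))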
Sep 12 17:12:10.621022 sshd[6637]: Accepted publickey for core from 147.75.109.163 port 58398 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:10.624113 sshd[6637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:10.634069 systemd-logind[1999]: New session 14 of user core.
Sep 12 17:12:10.644731 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:12:10.894879 sshd[6637]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:10.901985 systemd[1]: sshd@13-172.31.22.10:22-147.75.109.163:58398.service: Deactivated successfully.
Sep 12 17:12:10.902620 systemd-logind[1999]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:12:10.909031 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:12:10.913772 systemd-logind[1999]: Removed session 14.
Sep 12 17:12:15.939015 systemd[1]: Started sshd@14-172.31.22.10:22-147.75.109.163:58400.service - OpenSSH per-connection server daemon (147.75.109.163:58400).
Sep 12 17:12:16.133494 sshd[6670]: Accepted publickey for core from 147.75.109.163 port 58400 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:16.169158 sshd[6670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:16.183709 systemd-logind[1999]: New session 15 of user core.
Sep 12 17:12:16.191744 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:12:16.478833 sshd[6670]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:16.486881 systemd[1]: sshd@14-172.31.22.10:22-147.75.109.163:58400.service: Deactivated successfully.
Sep 12 17:12:16.491980 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:12:16.496298 systemd-logind[1999]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:12:16.499143 systemd-logind[1999]: Removed session 15.
Sep 12 17:12:21.518025 systemd[1]: Started sshd@15-172.31.22.10:22-147.75.109.163:53994.service - OpenSSH per-connection server daemon (147.75.109.163:53994).
Sep 12 17:12:21.700987 sshd[6691]: Accepted publickey for core from 147.75.109.163 port 53994 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:21.703850 sshd[6691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:21.712009 systemd-logind[1999]: New session 16 of user core.
Sep 12 17:12:21.722719 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:12:21.974699 sshd[6691]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:21.981732 systemd[1]: sshd@15-172.31.22.10:22-147.75.109.163:53994.service: Deactivated successfully.
Sep 12 17:12:21.986508 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:12:21.989593 systemd-logind[1999]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:12:21.991538 systemd-logind[1999]: Removed session 16.
Sep 12 17:12:22.015232 systemd[1]: Started sshd@16-172.31.22.10:22-147.75.109.163:54006.service - OpenSSH per-connection server daemon (147.75.109.163:54006).
Sep 12 17:12:22.194905 sshd[6706]: Accepted publickey for core from 147.75.109.163 port 54006 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:22.197653 sshd[6706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:22.205162 systemd-logind[1999]: New session 17 of user core.
Sep 12 17:12:22.216779 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:12:24.748161 sshd[6706]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:24.754227 systemd[1]: sshd@16-172.31.22.10:22-147.75.109.163:54006.service: Deactivated successfully.
Sep 12 17:12:24.759159 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:12:24.763557 systemd-logind[1999]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:12:24.765388 systemd-logind[1999]: Removed session 17.
Sep 12 17:12:24.784032 systemd[1]: Started sshd@17-172.31.22.10:22-147.75.109.163:54008.service - OpenSSH per-connection server daemon (147.75.109.163:54008).
Sep 12 17:12:24.961354 sshd[6717]: Accepted publickey for core from 147.75.109.163 port 54008 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:24.964487 sshd[6717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:24.972785 systemd-logind[1999]: New session 18 of user core.
Sep 12 17:12:24.986732 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:12:26.388221 sshd[6717]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:26.401038 systemd[1]: sshd@17-172.31.22.10:22-147.75.109.163:54008.service: Deactivated successfully.
Sep 12 17:12:26.408017 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:12:26.411379 systemd-logind[1999]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:12:26.440051 systemd[1]: Started sshd@18-172.31.22.10:22-147.75.109.163:54018.service - OpenSSH per-connection server daemon (147.75.109.163:54018).
Sep 12 17:12:26.445588 systemd-logind[1999]: Removed session 18.
Sep 12 17:12:26.668198 sshd[6735]: Accepted publickey for core from 147.75.109.163 port 54018 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:26.671182 sshd[6735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:26.680200 systemd-logind[1999]: New session 19 of user core.
Sep 12 17:12:26.685708 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:12:27.279180 sshd[6735]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:27.291696 systemd[1]: sshd@18-172.31.22.10:22-147.75.109.163:54018.service: Deactivated successfully.
Sep 12 17:12:27.316907 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:12:27.333634 systemd-logind[1999]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:12:27.345351 systemd[1]: Started sshd@19-172.31.22.10:22-147.75.109.163:54026.service - OpenSSH per-connection server daemon (147.75.109.163:54026).
Sep 12 17:12:27.352820 systemd-logind[1999]: Removed session 19.
Sep 12 17:12:27.535265 sshd[6766]: Accepted publickey for core from 147.75.109.163 port 54026 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:27.538631 sshd[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:27.550542 systemd-logind[1999]: New session 20 of user core.
Sep 12 17:12:27.556024 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:12:27.809276 sshd[6766]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:27.817383 systemd[1]: sshd@19-172.31.22.10:22-147.75.109.163:54026.service: Deactivated successfully.
Sep 12 17:12:27.821589 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:12:27.823896 systemd-logind[1999]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:12:27.826909 systemd-logind[1999]: Removed session 20.
Sep 12 17:12:32.852940 systemd[1]: Started sshd@20-172.31.22.10:22-147.75.109.163:55308.service - OpenSSH per-connection server daemon (147.75.109.163:55308).
Sep 12 17:12:33.024944 sshd[6785]: Accepted publickey for core from 147.75.109.163 port 55308 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:33.027733 sshd[6785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:33.036675 systemd-logind[1999]: New session 21 of user core.
Sep 12 17:12:33.044914 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:12:33.370970 sshd[6785]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:33.380320 systemd[1]: sshd@20-172.31.22.10:22-147.75.109.163:55308.service: Deactivated successfully.
Sep 12 17:12:33.387247 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:12:33.393834 systemd-logind[1999]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:12:33.397280 systemd-logind[1999]: Removed session 21.
Sep 12 17:12:38.414143 systemd[1]: Started sshd@21-172.31.22.10:22-147.75.109.163:55322.service - OpenSSH per-connection server daemon (147.75.109.163:55322).
Sep 12 17:12:38.603214 sshd[6862]: Accepted publickey for core from 147.75.109.163 port 55322 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:38.605998 sshd[6862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:38.614818 systemd-logind[1999]: New session 22 of user core.
Sep 12 17:12:38.620733 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:12:38.872971 sshd[6862]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:38.879765 systemd[1]: sshd@21-172.31.22.10:22-147.75.109.163:55322.service: Deactivated successfully.
Sep 12 17:12:38.885340 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:12:38.887545 systemd-logind[1999]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:12:38.890314 systemd-logind[1999]: Removed session 22.
Sep 12 17:12:43.919077 systemd[1]: Started sshd@22-172.31.22.10:22-147.75.109.163:41112.service - OpenSSH per-connection server daemon (147.75.109.163:41112).
Sep 12 17:12:44.139437 sshd[6876]: Accepted publickey for core from 147.75.109.163 port 41112 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:44.147706 sshd[6876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:44.162616 systemd-logind[1999]: New session 23 of user core.
Sep 12 17:12:44.174760 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:12:44.473084 sshd[6876]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:44.484304 systemd[1]: sshd@22-172.31.22.10:22-147.75.109.163:41112.service: Deactivated successfully.
Sep 12 17:12:44.491104 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:12:44.494210 systemd-logind[1999]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:12:44.499539 systemd-logind[1999]: Removed session 23.
Sep 12 17:12:49.510979 systemd[1]: Started sshd@23-172.31.22.10:22-147.75.109.163:41114.service - OpenSSH per-connection server daemon (147.75.109.163:41114).
Sep 12 17:12:49.691512 sshd[6909]: Accepted publickey for core from 147.75.109.163 port 41114 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:49.695408 sshd[6909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:49.703644 systemd-logind[1999]: New session 24 of user core.
Sep 12 17:12:49.710743 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:12:49.988864 sshd[6909]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:49.999380 systemd[1]: sshd@23-172.31.22.10:22-147.75.109.163:41114.service: Deactivated successfully.
Sep 12 17:12:50.009273 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:12:50.019079 systemd-logind[1999]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:12:50.023242 systemd-logind[1999]: Removed session 24.
Sep 12 17:12:55.029971 systemd[1]: Started sshd@24-172.31.22.10:22-147.75.109.163:38022.service - OpenSSH per-connection server daemon (147.75.109.163:38022).
Sep 12 17:12:55.219197 sshd[6924]: Accepted publickey for core from 147.75.109.163 port 38022 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:55.223157 sshd[6924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:55.235538 systemd-logind[1999]: New session 25 of user core.
Sep 12 17:12:55.242765 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:12:55.541298 sshd[6924]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:55.549829 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:12:55.553753 systemd[1]: sshd@24-172.31.22.10:22-147.75.109.163:38022.service: Deactivated successfully.
Sep 12 17:12:55.567804 systemd-logind[1999]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:12:55.575739 systemd-logind[1999]: Removed session 25.
Sep 12 17:12:57.327541 systemd[1]: run-containerd-runc-k8s.io-54b29a7d8977ccbf2a5f980dad65683243f62d0da074e03144d2ac64d1f31b83-runc.6SyC7M.mount: Deactivated successfully.
Sep 12 17:13:00.581714 systemd[1]: Started sshd@25-172.31.22.10:22-147.75.109.163:54782.service - OpenSSH per-connection server daemon (147.75.109.163:54782).
Sep 12 17:13:00.774910 sshd[6962]: Accepted publickey for core from 147.75.109.163 port 54782 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:13:00.779268 sshd[6962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:13:00.797540 systemd-logind[1999]: New session 26 of user core.
Sep 12 17:13:00.804284 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:13:01.126761 sshd[6962]: pam_unix(sshd:session): session closed for user core
Sep 12 17:13:01.133317 systemd[1]: sshd@25-172.31.22.10:22-147.75.109.163:54782.service: Deactivated successfully.
Sep 12 17:13:01.139087 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:13:01.141793 systemd-logind[1999]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:13:01.144489 systemd-logind[1999]: Removed session 26.
Sep 12 17:13:14.856504 systemd[1]: cri-containerd-4640fb72a2aea2220ab555898b51a1022fbecb21019752eebe1fc7b5cc14f125.scope: Deactivated successfully.
Sep 12 17:13:14.857011 systemd[1]: cri-containerd-4640fb72a2aea2220ab555898b51a1022fbecb21019752eebe1fc7b5cc14f125.scope: Consumed 19.007s CPU time.
Sep 12 17:13:14.904409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4640fb72a2aea2220ab555898b51a1022fbecb21019752eebe1fc7b5cc14f125-rootfs.mount: Deactivated successfully.
Sep 12 17:13:14.947313 containerd[2023]: time="2025-09-12T17:13:14.900984529Z" level=info msg="shim disconnected" id=4640fb72a2aea2220ab555898b51a1022fbecb21019752eebe1fc7b5cc14f125 namespace=k8s.io
Sep 12 17:13:14.947313 containerd[2023]: time="2025-09-12T17:13:14.946992961Z" level=warning msg="cleaning up after shim disconnected" id=4640fb72a2aea2220ab555898b51a1022fbecb21019752eebe1fc7b5cc14f125 namespace=k8s.io
Sep 12 17:13:14.947313 containerd[2023]: time="2025-09-12T17:13:14.947022553Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:13:14.969728 containerd[2023]: time="2025-09-12T17:13:14.969603577Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:13:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 17:13:15.676759 kubelet[3226]: I0912 17:13:15.676519 3226 scope.go:117] "RemoveContainer" containerID="4640fb72a2aea2220ab555898b51a1022fbecb21019752eebe1fc7b5cc14f125"
Sep 12 17:13:15.680624 containerd[2023]: time="2025-09-12T17:13:15.680383945Z" level=info msg="CreateContainer within sandbox \"ccad70116b0d94174754533ad495e5886424a92036e0bb9b28c09d6905814e31\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 12 17:13:15.719945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348453610.mount: Deactivated successfully.
Sep 12 17:13:15.726910 containerd[2023]: time="2025-09-12T17:13:15.726260305Z" level=info msg="CreateContainer within sandbox \"ccad70116b0d94174754533ad495e5886424a92036e0bb9b28c09d6905814e31\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"554193f4b44ea739f3d007c94b411c6583ba73d0a18b5eb22675f3cf8a950281\""
Sep 12 17:13:15.727488 containerd[2023]: time="2025-09-12T17:13:15.727205317Z" level=info msg="StartContainer for \"554193f4b44ea739f3d007c94b411c6583ba73d0a18b5eb22675f3cf8a950281\""
Sep 12 17:13:15.788925 systemd[1]: run-containerd-runc-k8s.io-554193f4b44ea739f3d007c94b411c6583ba73d0a18b5eb22675f3cf8a950281-runc.4tBJYv.mount: Deactivated successfully.
Sep 12 17:13:15.803130 systemd[1]: Started cri-containerd-554193f4b44ea739f3d007c94b411c6583ba73d0a18b5eb22675f3cf8a950281.scope - libcontainer container 554193f4b44ea739f3d007c94b411c6583ba73d0a18b5eb22675f3cf8a950281.
Sep 12 17:13:15.859312 containerd[2023]: time="2025-09-12T17:13:15.858751622Z" level=info msg="StartContainer for \"554193f4b44ea739f3d007c94b411c6583ba73d0a18b5eb22675f3cf8a950281\" returns successfully"
Sep 12 17:13:15.868140 systemd[1]: cri-containerd-fef36a33ca589e526a3eda91f58f7fd160a1ae5c27f9d375c9a7c3dd48be7042.scope: Deactivated successfully.
Sep 12 17:13:15.869208 systemd[1]: cri-containerd-fef36a33ca589e526a3eda91f58f7fd160a1ae5c27f9d375c9a7c3dd48be7042.scope: Consumed 6.036s CPU time, 19.5M memory peak, 0B memory swap peak.
Sep 12 17:13:15.926298 containerd[2023]: time="2025-09-12T17:13:15.926223782Z" level=info msg="shim disconnected" id=fef36a33ca589e526a3eda91f58f7fd160a1ae5c27f9d375c9a7c3dd48be7042 namespace=k8s.io
Sep 12 17:13:15.926909 containerd[2023]: time="2025-09-12T17:13:15.926554634Z" level=warning msg="cleaning up after shim disconnected" id=fef36a33ca589e526a3eda91f58f7fd160a1ae5c27f9d375c9a7c3dd48be7042 namespace=k8s.io
Sep 12 17:13:15.926909 containerd[2023]: time="2025-09-12T17:13:15.926583062Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:13:16.681883 kubelet[3226]: I0912 17:13:16.681749 3226 scope.go:117] "RemoveContainer" containerID="fef36a33ca589e526a3eda91f58f7fd160a1ae5c27f9d375c9a7c3dd48be7042"
Sep 12 17:13:16.687635 containerd[2023]: time="2025-09-12T17:13:16.687292382Z" level=info msg="CreateContainer within sandbox \"41b33c90d1467675003fd041abe51e01610d79b98c38b002e790df71e4a3589f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 17:13:16.703694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fef36a33ca589e526a3eda91f58f7fd160a1ae5c27f9d375c9a7c3dd48be7042-rootfs.mount: Deactivated successfully.
Sep 12 17:13:16.726981 containerd[2023]: time="2025-09-12T17:13:16.724321850Z" level=info msg="CreateContainer within sandbox \"41b33c90d1467675003fd041abe51e01610d79b98c38b002e790df71e4a3589f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f320971aa941c1ebe82b047c5e4bbbdddc5ac897793995aeaefdfe7d491b7b82\""
Sep 12 17:13:16.729480 containerd[2023]: time="2025-09-12T17:13:16.727962362Z" level=info msg="StartContainer for \"f320971aa941c1ebe82b047c5e4bbbdddc5ac897793995aeaefdfe7d491b7b82\""
Sep 12 17:13:16.787772 systemd[1]: Started cri-containerd-f320971aa941c1ebe82b047c5e4bbbdddc5ac897793995aeaefdfe7d491b7b82.scope - libcontainer container f320971aa941c1ebe82b047c5e4bbbdddc5ac897793995aeaefdfe7d491b7b82.
Sep 12 17:13:16.873746 containerd[2023]: time="2025-09-12T17:13:16.872662443Z" level=info msg="StartContainer for \"f320971aa941c1ebe82b047c5e4bbbdddc5ac897793995aeaefdfe7d491b7b82\" returns successfully"
Sep 12 17:13:17.885525 kubelet[3226]: E0912 17:13:17.885429 3226 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-10?timeout=10s\": context deadline exceeded"
Sep 12 17:13:21.341591 systemd[1]: cri-containerd-fc3ab3de68620401b0b04600b95e70eab68a7138fe605fe774a9404add8097d8.scope: Deactivated successfully.
Sep 12 17:13:21.342166 systemd[1]: cri-containerd-fc3ab3de68620401b0b04600b95e70eab68a7138fe605fe774a9404add8097d8.scope: Consumed 6.682s CPU time, 15.6M memory peak, 0B memory swap peak.
Sep 12 17:13:21.385291 containerd[2023]: time="2025-09-12T17:13:21.385159205Z" level=info msg="shim disconnected" id=fc3ab3de68620401b0b04600b95e70eab68a7138fe605fe774a9404add8097d8 namespace=k8s.io
Sep 12 17:13:21.385291 containerd[2023]: time="2025-09-12T17:13:21.385237289Z" level=warning msg="cleaning up after shim disconnected" id=fc3ab3de68620401b0b04600b95e70eab68a7138fe605fe774a9404add8097d8 namespace=k8s.io
Sep 12 17:13:21.385291 containerd[2023]: time="2025-09-12T17:13:21.385259153Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:13:21.393529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc3ab3de68620401b0b04600b95e70eab68a7138fe605fe774a9404add8097d8-rootfs.mount: Deactivated successfully.
Sep 12 17:13:21.706253 kubelet[3226]: I0912 17:13:21.706062 3226 scope.go:117] "RemoveContainer" containerID="fc3ab3de68620401b0b04600b95e70eab68a7138fe605fe774a9404add8097d8"
Sep 12 17:13:21.710328 containerd[2023]: time="2025-09-12T17:13:21.710267767Z" level=info msg="CreateContainer within sandbox \"89145df897331a2d0ad69acb39b0ad909ca008077bd27c34caa18032e74f3029\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 17:13:21.744889 containerd[2023]: time="2025-09-12T17:13:21.744787699Z" level=info msg="CreateContainer within sandbox \"89145df897331a2d0ad69acb39b0ad909ca008077bd27c34caa18032e74f3029\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6d5d140ce69d39d5f1007537a6ba9d4de8e883058b69498f7454a7dd4f18a19c\""
Sep 12 17:13:21.745722 containerd[2023]: time="2025-09-12T17:13:21.745588303Z" level=info msg="StartContainer for \"6d5d140ce69d39d5f1007537a6ba9d4de8e883058b69498f7454a7dd4f18a19c\""
Sep 12 17:13:21.810781 systemd[1]: Started cri-containerd-6d5d140ce69d39d5f1007537a6ba9d4de8e883058b69498f7454a7dd4f18a19c.scope - libcontainer container 6d5d140ce69d39d5f1007537a6ba9d4de8e883058b69498f7454a7dd4f18a19c.
Sep 12 17:13:21.880743 containerd[2023]: time="2025-09-12T17:13:21.880611932Z" level=info msg="StartContainer for \"6d5d140ce69d39d5f1007537a6ba9d4de8e883058b69498f7454a7dd4f18a19c\" returns successfully"