Sep 9 00:35:52.848064 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 9 00:35:52.848086 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Sep 8 22:48:00 -00 2025 Sep 9 00:35:52.848095 kernel: KASLR enabled Sep 9 00:35:52.848102 kernel: efi: EFI v2.7 by EDK II Sep 9 00:35:52.848108 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Sep 9 00:35:52.848114 kernel: random: crng init done Sep 9 00:35:52.848121 kernel: ACPI: Early table checksum verification disabled Sep 9 00:35:52.848127 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Sep 9 00:35:52.848133 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 9 00:35:52.848140 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:35:52.848146 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:35:52.848160 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:35:52.848167 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:35:52.848173 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:35:52.848181 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:35:52.848189 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:35:52.848195 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:35:52.848202 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:35:52.848209 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 9 00:35:52.848215 kernel: NUMA: Failed to initialise from firmware Sep 9 00:35:52.848222 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 00:35:52.848228 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Sep 9 00:35:52.848234 kernel: Zone ranges: Sep 9 00:35:52.848240 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 00:35:52.848247 kernel: DMA32 empty Sep 9 00:35:52.848254 kernel: Normal empty Sep 9 00:35:52.848260 kernel: Movable zone start for each node Sep 9 00:35:52.848266 kernel: Early memory node ranges Sep 9 00:35:52.848273 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Sep 9 00:35:52.848279 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 9 00:35:52.848286 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 9 00:35:52.848292 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 9 00:35:52.848298 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 9 00:35:52.848304 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 9 00:35:52.848311 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 9 00:35:52.848317 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 00:35:52.848323 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 9 00:35:52.848331 kernel: psci: probing for conduit method from ACPI. Sep 9 00:35:52.848337 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 9 00:35:52.848344 kernel: psci: Using standard PSCI v0.2 function IDs Sep 9 00:35:52.848352 kernel: psci: Trusted OS migration not required Sep 9 00:35:52.848359 kernel: psci: SMC Calling Convention v1.1 Sep 9 00:35:52.848366 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 9 00:35:52.848374 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 9 00:35:52.848381 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 9 00:35:52.848387 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 9 00:35:52.848394 kernel: Detected PIPT I-cache on CPU0 Sep 9 00:35:52.848401 kernel: CPU features: detected: GIC system register CPU interface Sep 9 00:35:52.848407 kernel: CPU features: detected: Hardware dirty bit management Sep 9 00:35:52.848414 kernel: CPU features: detected: Spectre-v4 Sep 9 00:35:52.848421 kernel: CPU features: detected: Spectre-BHB Sep 9 00:35:52.848427 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 9 00:35:52.848434 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 9 00:35:52.848442 kernel: CPU features: detected: ARM erratum 1418040 Sep 9 00:35:52.848449 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 9 00:35:52.848455 kernel: alternatives: applying boot alternatives Sep 9 00:35:52.848463 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7395fe4f9fb368b2829f9349e2a89e9a9e96b552675d3b261a5a30cf3c6cb15c Sep 9 00:35:52.848470 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 00:35:52.848477 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 00:35:52.848484 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:35:52.848490 kernel: Fallback order for Node 0: 0 Sep 9 00:35:52.848503 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 9 00:35:52.848509 kernel: Policy zone: DMA Sep 9 00:35:52.848516 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:35:52.848524 kernel: software IO TLB: area num 4. Sep 9 00:35:52.848530 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 9 00:35:52.848547 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved) Sep 9 00:35:52.848554 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 00:35:52.848561 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:35:52.848568 kernel: rcu: RCU event tracing is enabled. Sep 9 00:35:52.848575 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 00:35:52.848582 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:35:52.848589 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:35:52.848595 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 9 00:35:52.848602 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 00:35:52.848611 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 9 00:35:52.848618 kernel: GICv3: 256 SPIs implemented Sep 9 00:35:52.848625 kernel: GICv3: 0 Extended SPIs implemented Sep 9 00:35:52.848632 kernel: Root IRQ handler: gic_handle_irq Sep 9 00:35:52.848638 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 9 00:35:52.848645 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 9 00:35:52.848652 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 9 00:35:52.848659 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Sep 9 00:35:52.848665 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Sep 9 00:35:52.848672 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 9 00:35:52.848679 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 9 00:35:52.848686 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 00:35:52.848694 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:35:52.848701 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 9 00:35:52.848708 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 9 00:35:52.848714 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 9 00:35:52.848721 kernel: arm-pv: using stolen time PV Sep 9 00:35:52.848728 kernel: Console: colour dummy device 80x25 Sep 9 00:35:52.848735 kernel: ACPI: Core revision 20230628 Sep 9 00:35:52.848742 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 9 00:35:52.848749 kernel: pid_max: default: 32768 minimum: 301 Sep 9 00:35:52.848756 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 9 00:35:52.848764 kernel: landlock: Up and running. Sep 9 00:35:52.848771 kernel: SELinux: Initializing. Sep 9 00:35:52.848778 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:35:52.848785 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:35:52.848792 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:35:52.848799 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:35:52.848806 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:35:52.848813 kernel: rcu: Max phase no-delay instances is 400. Sep 9 00:35:52.848820 kernel: Platform MSI: ITS@0x8080000 domain created Sep 9 00:35:52.848828 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 9 00:35:52.848834 kernel: Remapping and enabling EFI services. Sep 9 00:35:52.848841 kernel: smp: Bringing up secondary CPUs ... 
Sep 9 00:35:52.848848 kernel: Detected PIPT I-cache on CPU1 Sep 9 00:35:52.848855 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 9 00:35:52.848862 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 9 00:35:52.848869 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:35:52.848876 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 9 00:35:52.848883 kernel: Detected PIPT I-cache on CPU2 Sep 9 00:35:52.848890 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 9 00:35:52.848903 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 9 00:35:52.848912 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:35:52.848927 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 9 00:35:52.848936 kernel: Detected PIPT I-cache on CPU3 Sep 9 00:35:52.848943 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 9 00:35:52.848951 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 9 00:35:52.848958 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:35:52.848965 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 9 00:35:52.848973 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 00:35:52.848981 kernel: SMP: Total of 4 processors activated. Sep 9 00:35:52.848988 kernel: CPU features: detected: 32-bit EL0 Support Sep 9 00:35:52.849010 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 9 00:35:52.849017 kernel: CPU features: detected: Common not Private translations Sep 9 00:35:52.849024 kernel: CPU features: detected: CRC32 instructions Sep 9 00:35:52.849031 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 9 00:35:52.849039 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 9 00:35:52.849046 kernel: CPU features: detected: LSE atomic instructions Sep 9 00:35:52.849054 kernel: CPU features: detected: Privileged Access Never Sep 9 00:35:52.849062 kernel: CPU features: detected: RAS Extension Support Sep 9 00:35:52.849069 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 9 00:35:52.849077 kernel: CPU: All CPU(s) started at EL1 Sep 9 00:35:52.849084 kernel: alternatives: applying system-wide alternatives Sep 9 00:35:52.849091 kernel: devtmpfs: initialized Sep 9 00:35:52.849099 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:35:52.849106 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 00:35:52.849113 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:35:52.849122 kernel: SMBIOS 3.0.0 present. 
Sep 9 00:35:52.849129 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Sep 9 00:35:52.849137 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:35:52.849144 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 9 00:35:52.849156 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 9 00:35:52.849164 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 9 00:35:52.849171 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:35:52.849179 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Sep 9 00:35:52.849188 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:35:52.849196 kernel: cpuidle: using governor menu Sep 9 00:35:52.849203 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 9 00:35:52.849210 kernel: ASID allocator initialised with 32768 entries Sep 9 00:35:52.849217 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:35:52.849225 kernel: Serial: AMBA PL011 UART driver Sep 9 00:35:52.849232 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 9 00:35:52.849239 kernel: Modules: 0 pages in range for non-PLT usage Sep 9 00:35:52.849246 kernel: Modules: 509008 pages in range for PLT usage Sep 9 00:35:52.849253 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:35:52.849262 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 00:35:52.849269 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 9 00:35:52.849276 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 9 00:35:52.849284 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:35:52.849291 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 00:35:52.849298 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 9 00:35:52.849305 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 9 00:35:52.849312 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:35:52.849319 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:35:52.849328 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:35:52.849335 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:35:52.849342 kernel: ACPI: Interpreter enabled Sep 9 00:35:52.849349 kernel: ACPI: Using GIC for interrupt routing Sep 9 00:35:52.849356 kernel: ACPI: MCFG table detected, 1 entries Sep 9 00:35:52.849364 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 9 00:35:52.849371 kernel: printk: console [ttyAMA0] enabled Sep 9 00:35:52.849378 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 00:35:52.849505 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:35:52.849615 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 9 00:35:52.849684 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 9 00:35:52.849751 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 9 00:35:52.849815 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 9 00:35:52.849824 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 9 00:35:52.849832 kernel: PCI host bridge to bus 0000:00
Sep 9 00:35:52.849900 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 9 00:35:52.849963 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 9 00:35:52.850019 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 9 00:35:52.850075 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 00:35:52.850164 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 9 00:35:52.850241 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 9 00:35:52.850309 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 9 00:35:52.850379 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 9 00:35:52.850445 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 9 00:35:52.850510 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 9 00:35:52.850629 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 9 00:35:52.850696 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 9 00:35:52.850755 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 9 00:35:52.850811 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 9 00:35:52.850872 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 9 00:35:52.850881 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 9 00:35:52.850889 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 9 00:35:52.850896 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 9 00:35:52.850903 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 9 00:35:52.850911 kernel: iommu: Default domain type: Translated Sep 9 00:35:52.850918 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 9 00:35:52.850925 kernel: efivars: Registered efivars operations Sep 9 00:35:52.850935 kernel: vgaarb: loaded Sep 9 00:35:52.850942 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 9 00:35:52.850949 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 00:35:52.850957 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:35:52.850964 kernel: pnp: PnP ACPI init Sep 9 00:35:52.851040 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 9 00:35:52.851051 kernel: pnp: PnP ACPI: found 1 devices Sep 9 00:35:52.851058 kernel: NET: Registered PF_INET protocol family Sep 9 00:35:52.851066 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 00:35:52.851075 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 00:35:52.851083 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:35:52.851090 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:35:52.851097 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 00:35:52.851105 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 00:35:52.851112 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:35:52.851120 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:35:52.851127 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:35:52.851136 kernel: PCI: CLS 0 bytes, default 64 Sep 9 00:35:52.851143 kernel: kvm [1]: HYP mode not available Sep 9 00:35:52.851191 kernel: Initialise system trusted keyrings
Sep 9 00:35:52.851200 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 00:35:52.851208 kernel: Key type asymmetric registered Sep 9 00:35:52.851215 kernel: Asymmetric key parser 'x509' registered Sep 9 00:35:52.851222 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 00:35:52.851229 kernel: io scheduler mq-deadline registered Sep 9 00:35:52.851236 kernel: io scheduler kyber registered Sep 9 00:35:52.851244 kernel: io scheduler bfq registered Sep 9 00:35:52.851254 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 9 00:35:52.851261 kernel: ACPI: button: Power Button [PWRB] Sep 9 00:35:52.851268 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 9 00:35:52.851363 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 9 00:35:52.851374 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:35:52.851381 kernel: thunder_xcv, ver 1.0 Sep 9 00:35:52.851388 kernel: thunder_bgx, ver 1.0 Sep 9 00:35:52.851395 kernel: nicpf, ver 1.0 Sep 9 00:35:52.851403 kernel: nicvf, ver 1.0 Sep 9 00:35:52.851482 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 9 00:35:52.851563 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:35:52 UTC (1757378152) Sep 9 00:35:52.851576 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 00:35:52.851584 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 9 00:35:52.851591 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 9 00:35:52.851598 kernel: watchdog: Hard watchdog permanently disabled Sep 9 00:35:52.851606 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:35:52.851613 kernel: Segment Routing with IPv6 Sep 9 00:35:52.851624 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:35:52.851631 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:35:52.851638 kernel: Key type dns_resolver registered Sep 9 00:35:52.851645 kernel: registered taskstats version 1 Sep 9 00:35:52.851652 kernel: Loading compiled-in X.509 certificates Sep 9 00:35:52.851660 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: f5b097e6797722e0cc665195a3c415b6be267631' Sep 9 00:35:52.851667 kernel: Key type .fscrypt registered Sep 9 00:35:52.851674 kernel: Key type fscrypt-provisioning registered Sep 9 00:35:52.851682 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 00:35:52.851690 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:35:52.851698 kernel: ima: No architecture policies found Sep 9 00:35:52.851705 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 9 00:35:52.851712 kernel: clk: Disabling unused clocks Sep 9 00:35:52.851720 kernel: Freeing unused kernel memory: 39424K Sep 9 00:35:52.851727 kernel: Run /init as init process Sep 9 00:35:52.851734 kernel: with arguments: Sep 9 00:35:52.851742 kernel: /init Sep 9 00:35:52.851749 kernel: with environment: Sep 9 00:35:52.851758 kernel: HOME=/ Sep 9 00:35:52.851765 kernel: TERM=linux Sep 9 00:35:52.851772 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:35:52.851781 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 00:35:52.851791 systemd[1]: Detected virtualization kvm. Sep 9 00:35:52.851799 systemd[1]: Detected architecture arm64.
Sep 9 00:35:52.851806 systemd[1]: Running in initrd. Sep 9 00:35:52.851816 systemd[1]: No hostname configured, using default hostname. Sep 9 00:35:52.851823 systemd[1]: Hostname set to . Sep 9 00:35:52.851831 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:35:52.851840 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:35:52.851848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:35:52.851856 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:35:52.851864 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:35:52.851872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:35:52.851881 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 00:35:52.851889 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:35:52.851899 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:35:52.851907 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:35:52.851915 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:35:52.851923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:35:52.851930 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:35:52.851940 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:35:52.851948 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:35:52.851955 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:35:52.851963 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:35:52.851971 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:35:52.851979 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:35:52.851986 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 9 00:35:52.851994 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:35:52.852002 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:35:52.852011 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:35:52.852019 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:35:52.852027 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:35:52.852034 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:35:52.852042 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 00:35:52.852050 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:35:52.852057 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:35:52.852070 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:35:52.852079 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:35:52.852087 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:35:52.852095 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:35:52.852103 systemd[1]: Finished systemd-fsck-usr.service. 
Sep 9 00:35:52.852111 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:35:52.852136 systemd-journald[237]: Collecting audit messages is disabled. Sep 9 00:35:52.852163 systemd-journald[237]: Journal started Sep 9 00:35:52.852184 systemd-journald[237]: Runtime Journal (/run/log/journal/99765638dadd4b18b4ec7d7a4efc2611) is 5.9M, max 47.3M, 41.4M free. Sep 9 00:35:52.846994 systemd-modules-load[239]: Inserted module 'overlay' Sep 9 00:35:52.855016 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:35:52.854928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:52.856104 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:35:52.860055 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:35:52.861570 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:35:52.863581 kernel: Bridge firewalling registered Sep 9 00:35:52.863087 systemd-modules-load[239]: Inserted module 'br_netfilter' Sep 9 00:35:52.863225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:35:52.864777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:35:52.868904 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:35:52.871699 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:35:52.879351 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:35:52.880702 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:35:52.881833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:35:52.892688 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:35:52.893698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:35:52.896405 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 00:35:52.909201 dracut-cmdline[281]: dracut-dracut-053 Sep 9 00:35:52.911867 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7395fe4f9fb368b2829f9349e2a89e9a9e96b552675d3b261a5a30cf3c6cb15c Sep 9 00:35:52.919967 systemd-resolved[275]: Positive Trust Anchors:
Sep 9 00:35:52.919985 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:35:52.920016 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:35:52.924720 systemd-resolved[275]: Defaulting to hostname 'linux'. Sep 9 00:35:52.925670 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:35:52.928886 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:35:52.985566 kernel: SCSI subsystem initialized Sep 9 00:35:52.990560 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:35:52.997565 kernel: iscsi: registered transport (tcp) Sep 9 00:35:53.010674 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:35:53.010716 kernel: QLogic iSCSI HBA Driver Sep 9 00:35:53.055026 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:35:53.065984 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:35:53.082814 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:35:53.082859 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:35:53.083628 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 9 00:35:53.129565 kernel: raid6: neonx8 gen() 15760 MB/s Sep 9 00:35:53.146559 kernel: raid6: neonx4 gen() 15672 MB/s Sep 9 00:35:53.163562 kernel: raid6: neonx2 gen() 13230 MB/s Sep 9 00:35:53.180563 kernel: raid6: neonx1 gen() 10513 MB/s Sep 9 00:35:53.197566 kernel: raid6: int64x8 gen() 6949 MB/s Sep 9 00:35:53.214566 kernel: raid6: int64x4 gen() 7341 MB/s Sep 9 00:35:53.231562 kernel: raid6: int64x2 gen() 6125 MB/s Sep 9 00:35:53.248564 kernel: raid6: int64x1 gen() 5044 MB/s Sep 9 00:35:53.248604 kernel: raid6: using algorithm neonx8 gen() 15760 MB/s Sep 9 00:35:53.265557 kernel: raid6: .... xor() 12046 MB/s, rmw enabled Sep 9 00:35:53.265598 kernel: raid6: using neon recovery algorithm Sep 9 00:35:53.272927 kernel: xor: measuring software checksum speed Sep 9 00:35:53.272961 kernel: 8regs : 19702 MB/sec Sep 9 00:35:53.272971 kernel: 32regs : 19674 MB/sec Sep 9 00:35:53.273920 kernel: arm64_neon : 26936 MB/sec Sep 9 00:35:53.273945 kernel: xor: using function: arm64_neon (26936 MB/sec) Sep 9 00:35:53.321573 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:35:53.332228 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:35:53.343893 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:35:53.359890 systemd-udevd[462]: Using default interface naming scheme 'v255'. Sep 9 00:35:53.363106 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:35:53.373088 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:35:53.387309 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation Sep 9 00:35:53.416181 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:35:53.427732 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:35:53.466886 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:35:53.488835 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 00:35:53.506594 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 00:35:53.507615 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:35:53.511612 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:35:53.514665 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:35:53.523562 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 9 00:35:53.530322 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:35:53.523853 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:35:53.544278 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:35:53.544300 kernel: GPT:9289727 != 19775487 Sep 9 00:35:53.544310 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:35:53.544320 kernel: GPT:9289727 != 19775487 Sep 9 00:35:53.544328 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:35:53.544346 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:53.538445 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:35:53.553210 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:35:53.553315 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:35:53.556844 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:35:53.567867 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (506) Sep 9 00:35:53.567893 kernel: BTRFS: device fsid 7c1eef97-905d-47ac-bb4a-010204f95541 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (522) Sep 9 00:35:53.559140 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:35:53.559288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:53.564495 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:35:53.573805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:35:53.583694 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 00:35:53.584931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:53.593643 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:35:53.597336 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 00:35:53.598363 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 00:35:53.603528 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 00:35:53.615721 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:35:53.617806 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:35:53.624549 disk-uuid[551]: Primary Header is updated. 
Sep 9 00:35:53.624549 disk-uuid[551]: Secondary Entries is updated. Sep 9 00:35:53.624549 disk-uuid[551]: Secondary Header is updated. Sep 9 00:35:53.627556 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:53.631556 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:53.639652 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:35:54.637035 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:54.637087 disk-uuid[552]: The operation has completed successfully. Sep 9 00:35:54.661490 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:35:54.661619 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:35:54.692759 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:35:54.695630 sh[573]: Success Sep 9 00:35:54.705568 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 9 00:35:54.759173 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:35:54.761090 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:35:54.762583 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 00:35:54.774571 kernel: BTRFS info (device dm-0): first mount of filesystem 7c1eef97-905d-47ac-bb4a-010204f95541 Sep 9 00:35:54.774607 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:35:54.774618 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 9 00:35:54.775148 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:35:54.776545 kernel: BTRFS info (device dm-0): using free space tree Sep 9 00:35:54.780856 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:35:54.781971 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 00:35:54.790712 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:35:54.792131 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:35:54.803098 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817 Sep 9 00:35:54.803148 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:35:54.803160 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:35:54.806573 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:35:54.814087 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 9 00:35:54.816570 kernel: BTRFS info (device vda6): last unmount of filesystem 995cc93a-6fc6-4281-a722-821717f17817 Sep 9 00:35:54.822972 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:35:54.829730 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 00:35:54.893166 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:35:54.899712 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 9 00:35:54.902372 ignition[668]: Ignition 2.19.0 Sep 9 00:35:54.902378 ignition[668]: Stage: fetch-offline Sep 9 00:35:54.902412 ignition[668]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:54.902420 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:54.902586 ignition[668]: parsed url from cmdline: "" Sep 9 00:35:54.902589 ignition[668]: no config URL provided Sep 9 00:35:54.902594 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:35:54.902600 ignition[668]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:35:54.902623 ignition[668]: op(1): [started] loading QEMU firmware config module Sep 9 00:35:54.902628 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:35:54.911811 ignition[668]: op(1): [finished] loading QEMU firmware config module Sep 9 00:35:54.921196 systemd-networkd[764]: lo: Link UP Sep 9 00:35:54.921206 systemd-networkd[764]: lo: Gained carrier Sep 9 00:35:54.921866 systemd-networkd[764]: Enumeration completed Sep 9 00:35:54.921940 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:35:54.922969 systemd[1]: Reached target network.target - Network. Sep 9 00:35:54.924593 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:35:54.924596 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:35:54.925423 systemd-networkd[764]: eth0: Link UP Sep 9 00:35:54.925426 systemd-networkd[764]: eth0: Gained carrier Sep 9 00:35:54.925433 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:35:54.945581 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:35:54.964002 ignition[668]: parsing config with SHA512: f9a52c2a8c30b48311a6ff50184ddbbdbe365460607bcd52cca47e742a6361e8c5c8fe419b04cb8f4caeb58ed63022b5234abdec4a772de3544f9e0e81ef6a9f Sep 9 00:35:54.969338 unknown[668]: fetched base config from "system" Sep 9 00:35:54.969396 unknown[668]: fetched user config from "qemu" Sep 9 00:35:54.970596 ignition[668]: fetch-offline: fetch-offline passed Sep 9 00:35:54.970673 ignition[668]: Ignition finished successfully Sep 9 00:35:54.972994 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:35:54.974067 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:35:54.983754 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:35:54.994041 ignition[770]: Ignition 2.19.0 Sep 9 00:35:54.994051 ignition[770]: Stage: kargs Sep 9 00:35:54.994216 ignition[770]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:54.994225 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:54.995067 ignition[770]: kargs: kargs passed Sep 9 00:35:54.995106 ignition[770]: Ignition finished successfully Sep 9 00:35:54.997306 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:35:55.011680 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 00:35:55.021438 ignition[779]: Ignition 2.19.0 Sep 9 00:35:55.021447 ignition[779]: Stage: disks Sep 9 00:35:55.021613 ignition[779]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:55.021622 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:55.022436 ignition[779]: disks: disks passed Sep 9 00:35:55.022474 ignition[779]: Ignition finished successfully Sep 9 00:35:55.026029 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:35:55.027199 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:35:55.029248 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:35:55.030797 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:35:55.032308 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:35:55.033902 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:35:55.045712 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:35:55.058768 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 9 00:35:55.064983 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:35:55.079661 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:35:55.122726 kernel: EXT4-fs (vda9): mounted filesystem d987a4c8-1278-4a59-9d40-0c91e08e9423 r/w with ordered data mode. Quota mode: none. Sep 9 00:35:55.123127 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:35:55.124160 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:35:55.133647 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:35:55.136653 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:35:55.137492 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:35:55.137530 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:35:55.137567 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:35:55.146793 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:35:55.148306 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:35:55.155350 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (799) Sep 9 00:35:55.155392 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817 Sep 9 00:35:55.155403 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:35:55.156418 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:35:55.160555 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:35:55.161718 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:35:55.196532 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:35:55.201967 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:35:55.205233 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:35:55.208761 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:35:55.281170 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 9 00:35:55.288755 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:35:55.290903 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:35:55.295593 kernel: BTRFS info (device vda6): last unmount of filesystem 995cc93a-6fc6-4281-a722-821717f17817 Sep 9 00:35:55.310929 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 00:35:55.315119 ignition[912]: INFO : Ignition 2.19.0 Sep 9 00:35:55.315897 ignition[912]: INFO : Stage: mount Sep 9 00:35:55.317613 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:55.317613 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:55.317613 ignition[912]: INFO : mount: mount passed Sep 9 00:35:55.317613 ignition[912]: INFO : Ignition finished successfully Sep 9 00:35:55.318884 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:35:55.327668 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:35:55.773211 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:35:55.781717 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:35:55.788291 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (926) Sep 9 00:35:55.788330 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817 Sep 9 00:35:55.788345 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:35:55.789563 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:35:55.791561 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:35:55.792570 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:35:55.808368 ignition[944]: INFO : Ignition 2.19.0 Sep 9 00:35:55.808368 ignition[944]: INFO : Stage: files Sep 9 00:35:55.809629 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:55.809629 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:55.809629 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:35:55.813083 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:35:55.813083 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:35:55.813083 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:35:55.818172 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:35:55.818172 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:35:55.818172 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 9 00:35:55.818172 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 9 00:35:55.814367 unknown[944]: wrote ssh authorized keys file for user: core Sep 9 00:35:55.858422 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:35:56.207605 systemd-networkd[764]: eth0: Gained IPv6LL Sep 9 00:35:56.220793 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 00:35:56.220793 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 00:35:56.224296 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 9 00:35:56.663729 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 00:35:57.255699 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 00:35:57.255699 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 00:35:57.258884 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:35:57.260794 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:35:57.260794 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 00:35:57.260794 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 9 00:35:57.260794 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:35:57.260794 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:35:57.260794 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 9 00:35:57.260794 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:35:57.298206 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:35:57.302090 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:35:57.303299 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:35:57.303299 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:35:57.303299 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:35:57.303299 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:35:57.303299 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:35:57.303299 ignition[944]: INFO : files: files passed Sep 9 00:35:57.303299 ignition[944]: INFO : Ignition finished successfully Sep 9 00:35:57.305064 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:35:57.318723 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:35:57.320805 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:35:57.324437 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:35:57.324556 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:35:57.328961 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 00:35:57.332461 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:35:57.332461 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:35:57.336711 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:35:57.336147 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:35:57.338334 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:35:57.350760 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:35:57.368761 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:35:57.368869 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:35:57.371187 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:35:57.372853 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:35:57.374374 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:35:57.375617 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:35:57.391601 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:35:57.394878 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:35:57.406040 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:35:57.407035 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:35:57.408534 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:35:57.410094 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:35:57.410222 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:35:57.412495 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:35:57.414089 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:35:57.415801 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:35:57.417390 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:35:57.419401 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:35:57.421153 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:35:57.422885 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:35:57.424744 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:35:57.426266 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:35:57.427666 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:35:57.428850 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:35:57.428966 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:35:57.431112 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:35:57.432508 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:35:57.434143 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:35:57.437692 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:35:57.439718 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:35:57.439839 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:35:57.442213 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:35:57.442325 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:35:57.444105 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:35:57.445378 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:35:57.449575 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:35:57.450638 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:35:57.452329 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:35:57.453555 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:35:57.453639 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:35:57.454886 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:35:57.454958 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:35:57.456429 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:35:57.456527 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:35:57.457945 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:35:57.458035 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:35:57.466761 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Sep 9 00:35:57.467457 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:35:57.467588 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:35:57.473774 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:35:57.474530 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:35:57.474670 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:35:57.477010 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:35:57.477111 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:35:57.482756 ignition[998]: INFO : Ignition 2.19.0 Sep 9 00:35:57.482756 ignition[998]: INFO : Stage: umount Sep 9 00:35:57.484036 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:57.484036 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:57.485035 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:35:57.488468 ignition[998]: INFO : umount: umount passed Sep 9 00:35:57.488468 ignition[998]: INFO : Ignition finished successfully Sep 9 00:35:57.485144 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:35:57.486947 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:35:57.487024 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:35:57.488214 systemd[1]: Stopped target network.target - Network. Sep 9 00:35:57.489424 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:35:57.489482 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:35:57.491673 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:35:57.491714 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:35:57.493084 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:35:57.493121 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:35:57.495887 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:35:57.495941 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:35:57.498939 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:35:57.500844 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:35:57.502888 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:35:57.513629 systemd-networkd[764]: eth0: DHCPv6 lease lost Sep 9 00:35:57.514709 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:35:57.514823 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:35:57.516517 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:35:57.516612 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:35:57.518610 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:35:57.518731 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:35:57.522588 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:35:57.522651 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:35:57.536698 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:35:57.537448 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Sep 9 00:35:57.537510 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:35:57.543205 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:35:57.543273 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:35:57.545702 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:35:57.545752 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:35:57.548384 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:35:57.555879 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:35:57.555955 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:35:57.557938 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:35:57.558012 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:35:57.559968 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:35:57.560058 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:35:57.567080 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:35:57.567241 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:35:57.570970 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:35:57.571007 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:35:57.572783 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:35:57.573195 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:35:57.574560 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:35:57.574610 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:35:57.577168 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:35:57.577217 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:35:57.579939 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:35:57.579986 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:35:57.597783 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:35:57.598608 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:35:57.598668 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:35:57.600590 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:35:57.600642 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:57.605146 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:35:57.606079 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:35:57.607161 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:35:57.609716 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:35:57.622361 systemd[1]: Switching root. Sep 9 00:35:57.645471 systemd-journald[237]: Journal stopped Sep 9 00:35:58.364905 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
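From here the system tears down the initrd and pivots to the real root. When slicing a capture like this one, a small parser for the "timestamp process[pid]: message" shape of these entries is handy; a minimal sketch, with the regex tuned only to the line shapes visible above:

```python
# Minimal sketch: parse console-capture lines of the shape seen above,
# e.g. "Sep 9 00:35:57.645471 systemd-journald[237]: Journal stopped".
import re

LINE = re.compile(
    r"(?P<month>\w{3}) +(?P<day>\d+) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<proc>[\w@.()\[\]-]+?)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)"
)

def parse(line: str):
    """Return the entry's fields, or None for lines that don't match."""
    m = LINE.match(line)
    return m.groupdict() if m else None

sample = "Sep 9 00:35:57.645471 systemd-journald[237]: Journal stopped"
print(parse(sample))
# {'month': 'Sep', 'day': '9', 'time': '00:35:57.645471',
#  'proc': 'systemd-journald', 'pid': '237', 'msg': 'Journal stopped'}
```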
Sep 9 00:35:58.364964 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:35:58.364979 kernel: SELinux: policy capability open_perms=1 Sep 9 00:35:58.364988 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:35:58.364997 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:35:58.365007 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:35:58.365016 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:35:58.365025 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:35:58.365035 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:35:58.365044 kernel: audit: type=1403 audit(1757378157.796:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:35:58.365057 systemd[1]: Successfully loaded SELinux policy in 31.289ms. Sep 9 00:35:58.365075 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.074ms. Sep 9 00:35:58.365087 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 00:35:58.365098 systemd[1]: Detected virtualization kvm. Sep 9 00:35:58.365108 systemd[1]: Detected architecture arm64. Sep 9 00:35:58.365118 systemd[1]: Detected first boot. Sep 9 00:35:58.365136 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:35:58.365148 zram_generator::config[1042]: No configuration found. Sep 9 00:35:58.365163 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:35:58.365175 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:35:58.365185 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:35:58.365195 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:35:58.365206 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:35:58.365218 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:35:58.365229 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:35:58.365239 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:35:58.365249 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:35:58.365262 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:35:58.365272 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:35:58.365282 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:35:58.365296 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:35:58.365307 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:35:58.365318 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:35:58.365328 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:35:58.365339 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:35:58.365349 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 9 00:35:58.365362 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 00:35:58.365373 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:35:58.365383 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:35:58.365393 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:35:58.365404 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:35:58.365414 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:35:58.365424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:35:58.365436 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:35:58.365448 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:35:58.365459 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:35:58.365469 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:35:58.365480 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:35:58.365493 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:35:58.365504 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:35:58.365515 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:35:58.365526 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:35:58.365614 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:35:58.365631 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:35:58.365642 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:35:58.365652 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:35:58.365662 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:35:58.365672 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:35:58.365683 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:35:58.365694 systemd[1]: Reached target machines.target - Containers. Sep 9 00:35:58.365704 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:35:58.365716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:35:58.365727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:35:58.365737 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:35:58.365747 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:35:58.365759 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:35:58.365769 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:35:58.365780 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:35:58.365790 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:35:58.365801 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Sep 9 00:35:58.365813 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:35:58.365824 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:35:58.365835 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:35:58.365845 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:35:58.365855 kernel: fuse: init (API version 7.39) Sep 9 00:35:58.365865 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:35:58.365875 kernel: loop: module loaded Sep 9 00:35:58.365885 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:35:58.365896 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:35:58.365907 kernel: ACPI: bus type drm_connector registered Sep 9 00:35:58.365917 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:35:58.365928 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:35:58.365938 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:35:58.365948 systemd[1]: Stopped verity-setup.service. Sep 9 00:35:58.365958 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:35:58.365969 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:35:58.366009 systemd-journald[1113]: Collecting audit messages is disabled. Sep 9 00:35:58.366032 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:35:58.366044 systemd-journald[1113]: Journal started Sep 9 00:35:58.366064 systemd-journald[1113]: Runtime Journal (/run/log/journal/99765638dadd4b18b4ec7d7a4efc2611) is 5.9M, max 47.3M, 41.4M free. Sep 9 00:35:58.166708 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:35:58.182894 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:35:58.183245 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:35:58.367689 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:35:58.368267 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:35:58.369248 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:35:58.370236 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:35:58.372564 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:35:58.373685 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:35:58.374993 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:35:58.375146 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:35:58.376297 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:35:58.376423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:35:58.377640 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:35:58.377771 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:35:58.378820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:35:58.378960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:35:58.380150 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:35:58.380280 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Sep 9 00:35:58.381510 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:35:58.381657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:35:58.382730 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:35:58.383896 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:35:58.385348 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:35:58.396743 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:35:58.411628 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:35:58.413396 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:35:58.414324 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:35:58.414358 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:35:58.416071 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 9 00:35:58.417957 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:35:58.419771 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:35:58.420623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:35:58.422091 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:35:58.424681 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:35:58.425756 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:35:58.426882 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:35:58.428741 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:35:58.429722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:35:58.431840 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:35:58.433300 systemd-journald[1113]: Time spent on flushing to /var/log/journal/99765638dadd4b18b4ec7d7a4efc2611 is 18.559ms for 853 entries. Sep 9 00:35:58.433300 systemd-journald[1113]: System Journal (/var/log/journal/99765638dadd4b18b4ec7d7a4efc2611) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:35:58.469243 systemd-journald[1113]: Received client request to flush runtime journal. Sep 9 00:35:58.469289 kernel: loop0: detected capacity change from 0 to 114432 Sep 9 00:35:58.435502 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:35:58.437854 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:35:58.441987 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:35:58.443745 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:35:58.445203 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:35:58.447896 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Sep 9 00:35:58.451704 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:35:58.460728 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 9 00:35:58.464300 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 00:35:58.469818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:35:58.472153 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:35:58.478066 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:35:58.480009 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 9 00:35:58.494901 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:35:58.495691 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:35:58.497591 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 9 00:35:58.502577 kernel: loop1: detected capacity change from 0 to 114328 Sep 9 00:35:58.508704 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:35:58.534937 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Sep 9 00:35:58.534961 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Sep 9 00:35:58.538818 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:35:58.542554 kernel: loop2: detected capacity change from 0 to 211168 Sep 9 00:35:58.596564 kernel: loop3: detected capacity change from 0 to 114432 Sep 9 00:35:58.602558 kernel: loop4: detected capacity change from 0 to 114328 Sep 9 00:35:58.609563 kernel: loop5: detected capacity change from 0 to 211168 Sep 9 00:35:58.614874 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:35:58.615260 (sd-merge)[1179]: Merged extensions into '/usr'. Sep 9 00:35:58.618794 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:35:58.619309 systemd[1]: Reloading... Sep 9 00:35:58.688699 zram_generator::config[1208]: No configuration found. Sep 9 00:35:58.750373 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:35:58.777735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:35:58.814405 systemd[1]: Reloading finished in 194 ms. Sep 9 00:35:58.844339 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:35:58.845837 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:35:58.862725 systemd[1]: Starting ensure-sysext.service... Sep 9 00:35:58.864650 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:35:58.870286 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:35:58.870301 systemd[1]: Reloading... Sep 9 00:35:58.881061 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 9 00:35:58.881343 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:35:58.881997 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:35:58.882229 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Sep 9 00:35:58.882282 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Sep 9 00:35:58.884534 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:35:58.884556 systemd-tmpfiles[1240]: Skipping /boot Sep 9 00:35:58.891424 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:35:58.891441 systemd-tmpfiles[1240]: Skipping /boot Sep 9 00:35:58.924566 zram_generator::config[1270]: No configuration found. Sep 9 00:35:59.005011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:35:59.041033 systemd[1]: Reloading finished in 170 ms. Sep 9 00:35:59.054527 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:35:59.063128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:35:59.070339 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:35:59.072492 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:35:59.074730 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:35:59.078639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:35:59.081390 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:35:59.083764 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:35:59.086827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:35:59.089064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:35:59.094801 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:35:59.098199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:35:59.099325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:35:59.101793 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:35:59.103423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:35:59.105586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:35:59.106914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:35:59.107028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:35:59.108709 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:35:59.114527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:35:59.117183 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:35:59.118615 systemd-udevd[1309]: Using default interface naming scheme 'v255'. 
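The (sd-merge) entries above record systemd-sysext layering three extension images (containerd-flatcar, docker-flatcar, kubernetes) onto /usr, followed by the daemon reloads. The real mechanism is a read-only overlayfs mount; the toy Python model below only approximates that as a last-wins dictionary overlay, and its file lists are invented for illustration.

```python
# Toy model of the sysext merge logged above: each extension contributes
# a /usr tree, and the merged view is the base with extension trees
# layered on top (later layers win), loosely mimicking overlayfs
# lowerdir stacking. File lists are invented.
base_usr = {"bin/bash": "base", "lib/libc.so": "base"}
extensions = {
    "containerd-flatcar": {"bin/containerd": "containerd-flatcar"},
    "docker-flatcar": {"bin/docker": "docker-flatcar"},
    "kubernetes": {"bin/kubelet": "kubernetes", "bin/kubectl": "kubernetes"},
}

merged = dict(base_usr)
for name in ("containerd-flatcar", "docker-flatcar", "kubernetes"):
    merged.update(extensions[name])  # later extension wins on conflicts

for path, origin in sorted(merged.items()):
    print(f"/usr/{path}  (from {origin})")
```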
Sep 9 00:35:59.120444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:35:59.121745 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:35:59.123155 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:35:59.126550 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:35:59.129156 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:35:59.129286 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:35:59.137395 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:35:59.138884 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:35:59.140366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:35:59.140651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:35:59.142924 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:35:59.150264 systemd[1]: Finished ensure-sysext.service. Sep 9 00:35:59.154597 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:35:59.165374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:35:59.165514 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:35:59.166915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:35:59.173739 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:35:59.177229 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:35:59.179057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:35:59.183473 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:35:59.184759 augenrules[1366]: No rules Sep 9 00:35:59.185696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:35:59.194731 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:35:59.195520 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:35:59.196046 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:35:59.197896 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:35:59.198599 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:35:59.205560 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1359) Sep 9 00:35:59.207443 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 00:35:59.225179 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:35:59.225624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:35:59.229848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 9 00:35:59.240650 systemd-resolved[1307]: Positive Trust Anchors: Sep 9 00:35:59.241345 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:35:59.241381 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:35:59.246170 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:35:59.248773 systemd-resolved[1307]: Defaulting to hostname 'linux'. Sep 9 00:35:59.257705 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:35:59.258746 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:35:59.259687 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:35:59.271156 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:35:59.288771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:35:59.289328 systemd-networkd[1372]: lo: Link UP Sep 9 00:35:59.289587 systemd-networkd[1372]: lo: Gained carrier Sep 9 00:35:59.290340 systemd-networkd[1372]: Enumeration completed Sep 9 00:35:59.290616 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:35:59.291255 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:35:59.291336 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:35:59.291598 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:35:59.292754 systemd[1]: Reached target network.target - Network. Sep 9 00:35:59.292820 systemd-networkd[1372]: eth0: Link UP Sep 9 00:35:59.292824 systemd-networkd[1372]: eth0: Gained carrier Sep 9 00:35:59.292839 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:35:59.293502 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:35:59.295489 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:35:59.297628 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 00:35:59.300841 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 00:35:59.310732 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:35:59.311762 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection. Sep 9 00:35:59.313137 systemd-timesyncd[1373]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:35:59.313194 systemd-timesyncd[1373]: Initial clock synchronization to Tue 2025-09-09 00:35:59.696333 UTC. Sep 9 00:35:59.315197 lvm[1394]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Sep 9 00:35:59.331608 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:59.346071 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 00:35:59.347303 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:35:59.349652 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:35:59.350513 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:35:59.351444 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:35:59.352667 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:35:59.353743 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:35:59.354681 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:35:59.355575 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:35:59.355611 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:35:59.356298 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:35:59.357772 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:35:59.360098 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:35:59.370465 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:35:59.372518 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 00:35:59.373808 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:35:59.374829 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:35:59.375730 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:35:59.376488 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:35:59.376522 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:35:59.377448 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:35:59.379406 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:35:59.380338 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:35:59.383263 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:35:59.385299 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:35:59.386704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:35:59.387739 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:35:59.389267 jq[1405]: false Sep 9 00:35:59.389585 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:35:59.394237 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:35:59.396261 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:35:59.401100 systemd[1]: Starting systemd-logind.service - User Login Management... 
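A detail worth noticing in the timesyncd entries a few lines up: the journal stamps the entry 00:35:59.313194, yet it reports initial clock synchronization to 00:35:59.696333 UTC, implying the clock was stepped forward by roughly 0.38 s. A quick check, assuming the journal timestamp reflects the pre-sync clock:

```python
# Compute the apparent clock step from the two timestamps in the log.
from datetime import datetime

before = datetime.fromisoformat("2025-09-09 00:35:59.313194")  # journal stamp
after = datetime.fromisoformat("2025-09-09 00:35:59.696333")   # synced time
print(f"clock step: {(after - before).total_seconds():+.6f} s")  # +0.383139 s
```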
Sep 9 00:35:59.404430 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:35:59.405836 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:35:59.408801 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:35:59.414168 extend-filesystems[1406]: Found loop3 Sep 9 00:35:59.415065 extend-filesystems[1406]: Found loop4 Sep 9 00:35:59.415065 extend-filesystems[1406]: Found loop5 Sep 9 00:35:59.415065 extend-filesystems[1406]: Found vda Sep 9 00:35:59.415065 extend-filesystems[1406]: Found vda1 Sep 9 00:35:59.415065 extend-filesystems[1406]: Found vda2 Sep 9 00:35:59.415065 extend-filesystems[1406]: Found vda3 Sep 9 00:35:59.415065 extend-filesystems[1406]: Found usr Sep 9 00:35:59.415065 extend-filesystems[1406]: Found vda4 Sep 9 00:35:59.415065 extend-filesystems[1406]: Found vda6 Sep 9 00:35:59.415065 extend-filesystems[1406]: Found vda7 Sep 9 00:35:59.415065 extend-filesystems[1406]: Found vda9 Sep 9 00:35:59.415065 extend-filesystems[1406]: Checking size of /dev/vda9 Sep 9 00:35:59.446800 extend-filesystems[1406]: Resized partition /dev/vda9 Sep 9 00:35:59.450974 update_engine[1415]: I20250909 00:35:59.437669 1415 main.cc:92] Flatcar Update Engine starting Sep 9 00:35:59.450974 update_engine[1415]: I20250909 00:35:59.439941 1415 update_check_scheduler.cc:74] Next update check in 8m9s Sep 9 00:35:59.418671 dbus-daemon[1404]: [system] SELinux support is enabled Sep 9 00:35:59.417767 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:35:59.453628 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1351) Sep 9 00:35:59.453695 jq[1419]: true Sep 9 00:35:59.423195 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:35:59.426311 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 00:35:59.429863 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:35:59.430010 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:35:59.433863 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:35:59.434013 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:35:59.439872 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:35:59.440046 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:35:59.455351 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:35:59.455382 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:35:59.456548 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:35:59.456563 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 9 00:35:59.458183 extend-filesystems[1433]: resize2fs 1.47.1 (20-May-2024) Sep 9 00:35:59.462037 jq[1429]: true Sep 9 00:35:59.462113 tar[1426]: linux-arm64/LICENSE Sep 9 00:35:59.462113 tar[1426]: linux-arm64/helm Sep 9 00:35:59.470222 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:35:59.473916 (ntainerd)[1438]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:35:59.478411 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:35:59.485380 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:35:59.487255 systemd-logind[1411]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 00:35:59.489235 systemd-logind[1411]: New seat seat0. Sep 9 00:35:59.493675 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:35:59.497560 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:35:59.509130 extend-filesystems[1433]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:35:59.509130 extend-filesystems[1433]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:35:59.509130 extend-filesystems[1433]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:35:59.514599 extend-filesystems[1406]: Resized filesystem in /dev/vda9 Sep 9 00:35:59.513665 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:35:59.513898 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:35:59.535568 bash[1463]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:35:59.535506 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:35:59.538274 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 00:35:59.553703 locksmithd[1444]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:35:59.622067 containerd[1438]: time="2025-09-09T00:35:59.621978320Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 9 00:35:59.645878 containerd[1438]: time="2025-09-09T00:35:59.645836040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:59.647265 containerd[1438]: time="2025-09-09T00:35:59.647230760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:35:59.647294 containerd[1438]: time="2025-09-09T00:35:59.647265760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:35:59.647294 containerd[1438]: time="2025-09-09T00:35:59.647283520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:35:59.647474 containerd[1438]: time="2025-09-09T00:35:59.647452520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 9 00:35:59.647500 containerd[1438]: time="2025-09-09T00:35:59.647477280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Sep 9 00:35:59.647564 containerd[1438]: time="2025-09-09T00:35:59.647530360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:35:59.647589 containerd[1438]: time="2025-09-09T00:35:59.647571760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:59.647778 containerd[1438]: time="2025-09-09T00:35:59.647753960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:35:59.647804 containerd[1438]: time="2025-09-09T00:35:59.647776320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:59.647804 containerd[1438]: time="2025-09-09T00:35:59.647790120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:35:59.647804 containerd[1438]: time="2025-09-09T00:35:59.647799520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:59.647896 containerd[1438]: time="2025-09-09T00:35:59.647878560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:59.648095 containerd[1438]: time="2025-09-09T00:35:59.648073360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:59.648206 containerd[1438]: time="2025-09-09T00:35:59.648184840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:35:59.648237 containerd[1438]: time="2025-09-09T00:35:59.648204880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:35:59.648306 containerd[1438]: time="2025-09-09T00:35:59.648288360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 00:35:59.648354 containerd[1438]: time="2025-09-09T00:35:59.648337640Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:35:59.651807 containerd[1438]: time="2025-09-09T00:35:59.651780160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:35:59.651854 containerd[1438]: time="2025-09-09T00:35:59.651834880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:35:59.651854 containerd[1438]: time="2025-09-09T00:35:59.651850360Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 00:35:59.651889 containerd[1438]: time="2025-09-09T00:35:59.651865800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 00:35:59.651945 containerd[1438]: time="2025-09-09T00:35:59.651880480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 9 00:35:59.652098 containerd[1438]: time="2025-09-09T00:35:59.652078080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:35:59.652562 containerd[1438]: time="2025-09-09T00:35:59.652350160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:35:59.652562 containerd[1438]: time="2025-09-09T00:35:59.652480000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 00:35:59.652562 containerd[1438]: time="2025-09-09T00:35:59.652497640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 00:35:59.652562 containerd[1438]: time="2025-09-09T00:35:59.652510920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 00:35:59.652562 containerd[1438]: time="2025-09-09T00:35:59.652524880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:35:59.652562 containerd[1438]: time="2025-09-09T00:35:59.652557440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 00:35:59.652682 containerd[1438]: time="2025-09-09T00:35:59.652572000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 00:35:59.652682 containerd[1438]: time="2025-09-09T00:35:59.652586560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:35:59.652682 containerd[1438]: time="2025-09-09T00:35:59.652601360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 00:35:59.652682 containerd[1438]: time="2025-09-09T00:35:59.652620520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:35:59.652682 containerd[1438]: time="2025-09-09T00:35:59.652637560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:35:59.652682 containerd[1438]: time="2025-09-09T00:35:59.652649160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:35:59.652682 containerd[1438]: time="2025-09-09T00:35:59.652669680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.652682 containerd[1438]: time="2025-09-09T00:35:59.652683040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.652840 containerd[1438]: time="2025-09-09T00:35:59.652694880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.652840 containerd[1438]: time="2025-09-09T00:35:59.652712360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.652840 containerd[1438]: time="2025-09-09T00:35:59.652739560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.652840 containerd[1438]: time="2025-09-09T00:35:59.652758320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 9 00:35:59.652840 containerd[1438]: time="2025-09-09T00:35:59.652771120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.652840 containerd[1438]: time="2025-09-09T00:35:59.652783680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.652840 containerd[1438]: time="2025-09-09T00:35:59.652796800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.652840 containerd[1438]: time="2025-09-09T00:35:59.652810280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.652840 containerd[1438]: time="2025-09-09T00:35:59.652821400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.652840 containerd[1438]: time="2025-09-09T00:35:59.652833760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.653004 containerd[1438]: time="2025-09-09T00:35:59.652846440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.653004 containerd[1438]: time="2025-09-09T00:35:59.652862280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 9 00:35:59.653004 containerd[1438]: time="2025-09-09T00:35:59.652883000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.653004 containerd[1438]: time="2025-09-09T00:35:59.652895600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.653004 containerd[1438]: time="2025-09-09T00:35:59.652906280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:35:59.653091 containerd[1438]: time="2025-09-09T00:35:59.653018800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:35:59.653091 containerd[1438]: time="2025-09-09T00:35:59.653055520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 00:35:59.653091 containerd[1438]: time="2025-09-09T00:35:59.653067680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:35:59.653091 containerd[1438]: time="2025-09-09T00:35:59.653079840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 00:35:59.653091 containerd[1438]: time="2025-09-09T00:35:59.653089480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:35:59.653191 containerd[1438]: time="2025-09-09T00:35:59.653102880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 00:35:59.653191 containerd[1438]: time="2025-09-09T00:35:59.653122120Z" level=info msg="NRI interface is disabled by configuration." Sep 9 00:35:59.653191 containerd[1438]: time="2025-09-09T00:35:59.653134880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 9 00:35:59.654547 containerd[1438]: time="2025-09-09T00:35:59.653473520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:35:59.654547 containerd[1438]: time="2025-09-09T00:35:59.653535080Z" level=info msg="Connect containerd service" Sep 9 00:35:59.654547 containerd[1438]: time="2025-09-09T00:35:59.653576880Z" level=info msg="using legacy CRI server" Sep 9 00:35:59.654547 containerd[1438]: time="2025-09-09T00:35:59.653583280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:35:59.654547 containerd[1438]: time="2025-09-09T00:35:59.653665920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:35:59.655495 containerd[1438]: time="2025-09-09T00:35:59.655460240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:35:59.655673 
containerd[1438]: time="2025-09-09T00:35:59.655629680Z" level=info msg="Start subscribing containerd event" Sep 9 00:35:59.655699 containerd[1438]: time="2025-09-09T00:35:59.655688000Z" level=info msg="Start recovering state" Sep 9 00:35:59.655771 containerd[1438]: time="2025-09-09T00:35:59.655755120Z" level=info msg="Start event monitor" Sep 9 00:35:59.655799 containerd[1438]: time="2025-09-09T00:35:59.655770280Z" level=info msg="Start snapshots syncer" Sep 9 00:35:59.655799 containerd[1438]: time="2025-09-09T00:35:59.655779920Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:35:59.655799 containerd[1438]: time="2025-09-09T00:35:59.655786920Z" level=info msg="Start streaming server" Sep 9 00:35:59.655969 containerd[1438]: time="2025-09-09T00:35:59.655948520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:35:59.656008 containerd[1438]: time="2025-09-09T00:35:59.655994640Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:35:59.656058 containerd[1438]: time="2025-09-09T00:35:59.656044720Z" level=info msg="containerd successfully booted in 0.036864s" Sep 9 00:35:59.656145 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:35:59.860629 tar[1426]: linux-arm64/README.md Sep 9 00:35:59.872056 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:36:00.254631 sshd_keygen[1417]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:36:00.274481 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:36:00.291854 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:36:00.297458 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:36:00.297693 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:36:00.300253 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:36:00.312442 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:36:00.315188 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:36:00.317214 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 00:36:00.318425 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:36:00.759972 systemd-networkd[1372]: eth0: Gained IPv6LL Sep 9 00:36:00.763116 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:36:00.764628 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:36:00.778835 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:36:00.781083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:00.783008 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:36:00.797133 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:36:00.797329 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:36:00.798818 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:36:00.801790 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:36:01.385433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:01.386924 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:36:01.387863 systemd[1]: Startup finished in 512ms (kernel) + 5.110s (initrd) + 3.624s (userspace) = 9.247s. 
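
At this point containerd reports serving on both /run/containerd/containerd.sock.ttrpc and /run/containerd/containerd.sock before systemd marks the unit started. As a minimal sketch (assuming access to that socket and the github.com/containerd/containerd Go client), a client can confirm the daemon is up and query the version it just booted with:

package main

import (
    "context"
    "fmt"
    "log"

    containerd "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // Dial the socket the daemon reports serving on in the log above.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatalf("connect: %v", err)
    }
    defer client.Close()

    // The CRI plugin keeps Kubernetes-managed resources in the "k8s.io" namespace.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    v, err := client.Version(ctx)
    if err != nil {
        log.Fatalf("version: %v", err)
    }
    fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}
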
Sep 9 00:36:01.388974 (kubelet)[1516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:36:01.789373 kubelet[1516]: E0909 00:36:01.789226 1516 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:36:01.791896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:36:01.792061 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:36:05.949447 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:36:05.950786 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:54976.service - OpenSSH per-connection server daemon (10.0.0.1:54976). Sep 9 00:36:05.997735 sshd[1529]: Accepted publickey for core from 10.0.0.1 port 54976 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:36:06.002149 sshd[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:36:06.011868 systemd-logind[1411]: New session 1 of user core. Sep 9 00:36:06.012742 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:36:06.028890 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:36:06.040014 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:36:06.042422 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:36:06.053118 (systemd)[1533]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:36:06.140093 systemd[1533]: Queued start job for default target default.target. Sep 9 00:36:06.151508 systemd[1533]: Created slice app.slice - User Application Slice. Sep 9 00:36:06.151549 systemd[1533]: Reached target paths.target - Paths. Sep 9 00:36:06.151576 systemd[1533]: Reached target timers.target - Timers. Sep 9 00:36:06.152845 systemd[1533]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:36:06.166484 systemd[1533]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:36:06.166607 systemd[1533]: Reached target sockets.target - Sockets. Sep 9 00:36:06.166620 systemd[1533]: Reached target basic.target - Basic System. Sep 9 00:36:06.166653 systemd[1533]: Reached target default.target - Main User Target. Sep 9 00:36:06.166680 systemd[1533]: Startup finished in 105ms. Sep 9 00:36:06.166837 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:36:06.168191 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:36:06.231221 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:54992.service - OpenSSH per-connection server daemon (10.0.0.1:54992). Sep 9 00:36:06.275652 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 54992 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:36:06.276902 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:36:06.284324 systemd-logind[1411]: New session 2 of user core. Sep 9 00:36:06.297744 systemd[1]: Started session-2.scope - Session 2 of User core. 
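
The kubelet exit above is the normal first-boot sequence on a node that has not yet run kubeadm init or kubeadm join: /var/lib/kubelet/config.yaml does not exist until kubeadm writes it, so the unit fails and systemd retries later (see the "Scheduled restart job" entry further down). A sketch of the same precondition check:

package main

import (
    "errors"
    "fmt"
    "io/fs"
    "os"
)

func main() {
    // The exact path the kubelet reports failing to read in the log above.
    const cfg = "/var/lib/kubelet/config.yaml"

    _, err := os.Stat(cfg)
    switch {
    case err == nil:
        fmt.Println(cfg, "present; kubelet can load its config")
    case errors.Is(err, fs.ErrNotExist):
        fmt.Println(cfg, "missing; kubeadm init/join has not written it yet")
        os.Exit(1)
    default:
        fmt.Println("stat failed:", err)
        os.Exit(2)
    }
}
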
Sep 9 00:36:06.353913 sshd[1544]: pam_unix(sshd:session): session closed for user core Sep 9 00:36:06.367021 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:54992.service: Deactivated successfully. Sep 9 00:36:06.368506 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:36:06.369736 systemd-logind[1411]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:36:06.378831 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:55004.service - OpenSSH per-connection server daemon (10.0.0.1:55004). Sep 9 00:36:06.380234 systemd-logind[1411]: Removed session 2. Sep 9 00:36:06.408424 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 55004 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:36:06.409630 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:36:06.413815 systemd-logind[1411]: New session 3 of user core. Sep 9 00:36:06.423724 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:36:06.473677 sshd[1551]: pam_unix(sshd:session): session closed for user core Sep 9 00:36:06.489936 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:55004.service: Deactivated successfully. Sep 9 00:36:06.491748 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:36:06.494866 systemd-logind[1411]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:36:06.503838 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:55014.service - OpenSSH per-connection server daemon (10.0.0.1:55014). Sep 9 00:36:06.506182 systemd-logind[1411]: Removed session 3. Sep 9 00:36:06.535909 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 55014 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:36:06.537117 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:36:06.541816 systemd-logind[1411]: New session 4 of user core. Sep 9 00:36:06.555748 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:36:06.616223 sshd[1558]: pam_unix(sshd:session): session closed for user core Sep 9 00:36:06.632810 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:55014.service: Deactivated successfully. Sep 9 00:36:06.634908 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:36:06.637766 systemd-logind[1411]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:36:06.638049 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:55016.service - OpenSSH per-connection server daemon (10.0.0.1:55016). Sep 9 00:36:06.640113 systemd-logind[1411]: Removed session 4. Sep 9 00:36:06.680743 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 55016 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:36:06.682166 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:36:06.687325 systemd-logind[1411]: New session 5 of user core. Sep 9 00:36:06.703762 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:36:06.764494 sudo[1568]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:36:06.764778 sudo[1568]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:36:06.780935 sudo[1568]: pam_unix(sudo:session): session closed for user root Sep 9 00:36:06.783536 sshd[1565]: pam_unix(sshd:session): session closed for user core Sep 9 00:36:06.790084 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:55016.service: Deactivated successfully. Sep 9 00:36:06.791937 systemd[1]: session-5.scope: Deactivated successfully. 
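
Each "Accepted publickey" / "session opened" / "session closed" triple above is a short-lived SSH session from 10.0.0.1 authenticated with the same RSA key, the typical footprint of provisioning tooling running one command per connection. For reference, a client that produces this server-side pattern might look like the following sketch (golang.org/x/crypto/ssh; the key path and command are placeholders, since the log only records the server side):

package main

import (
    "log"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    // Placeholder key path; the log only shows the server-side fingerprint.
    pem, err := os.ReadFile("/home/core/.ssh/id_rsa")
    if err != nil {
        log.Fatal(err)
    }
    signer, err := ssh.ParsePrivateKey(pem)
    if err != nil {
        log.Fatal(err)
    }

    cfg := &ssh.ClientConfig{
        User:            "core",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable in a sketch, not in production
    }

    client, err := ssh.Dial("tcp", "10.0.0.144:22", cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    session, err := client.NewSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    out, err := session.CombinedOutput("true") // one short-lived session, as in the log
    if err != nil {
        log.Fatal(err)
    }
    os.Stdout.Write(out)
}
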
Sep 9 00:36:06.793732 systemd-logind[1411]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:36:06.795037 systemd-logind[1411]: Removed session 5. Sep 9 00:36:06.810005 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:55030.service - OpenSSH per-connection server daemon (10.0.0.1:55030). Sep 9 00:36:06.851270 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 55030 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:36:06.853314 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:36:06.859204 systemd-logind[1411]: New session 6 of user core. Sep 9 00:36:06.871806 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:36:06.926410 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:36:06.926703 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:36:06.930650 sudo[1577]: pam_unix(sudo:session): session closed for user root Sep 9 00:36:06.937753 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 9 00:36:06.938387 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:36:06.960240 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 9 00:36:06.962471 auditctl[1580]: No rules Sep 9 00:36:06.963450 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:36:06.963671 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 9 00:36:06.965520 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:36:06.991604 augenrules[1598]: No rules Sep 9 00:36:06.996628 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:36:06.998495 sudo[1576]: pam_unix(sudo:session): session closed for user root Sep 9 00:36:07.001777 sshd[1573]: pam_unix(sshd:session): session closed for user core Sep 9 00:36:07.007117 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:55030.service: Deactivated successfully. Sep 9 00:36:07.008467 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:36:07.012154 systemd-logind[1411]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:36:07.017823 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:55042.service - OpenSSH per-connection server daemon (10.0.0.1:55042). Sep 9 00:36:07.020105 systemd-logind[1411]: Removed session 6. Sep 9 00:36:07.055328 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 55042 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:36:07.056942 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:36:07.060866 systemd-logind[1411]: New session 7 of user core. Sep 9 00:36:07.073738 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 00:36:07.127434 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:36:07.128448 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:36:07.397876 systemd[1]: Starting docker.service - Docker Application Container Engine... 
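
Just above, audit-rules is stopped and restarted with both auditctl and augenrules reporting "No rules" after the sudo rm of the rule files. The loaded rule set can be inspected with auditctl -l; a sketch that shells out to it:

package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    // auditctl -l prints the currently loaded audit rules; on the machine above
    // it would print "No rules", matching the auditctl/augenrules lines in the log.
    out, err := exec.Command("auditctl", "-l").CombinedOutput()
    if err != nil {
        log.Fatalf("auditctl: %v (%s)", err, out)
    }
    fmt.Print(string(out))
}
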
Sep 9 00:36:07.398032 (dockerd)[1628]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:36:07.610018 dockerd[1628]: time="2025-09-09T00:36:07.609955356Z" level=info msg="Starting up" Sep 9 00:36:07.749423 dockerd[1628]: time="2025-09-09T00:36:07.749320631Z" level=info msg="Loading containers: start." Sep 9 00:36:07.840731 kernel: Initializing XFRM netlink socket Sep 9 00:36:07.924431 systemd-networkd[1372]: docker0: Link UP Sep 9 00:36:07.947935 dockerd[1628]: time="2025-09-09T00:36:07.947884900Z" level=info msg="Loading containers: done." Sep 9 00:36:07.980813 dockerd[1628]: time="2025-09-09T00:36:07.980254636Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:36:07.980813 dockerd[1628]: time="2025-09-09T00:36:07.980414019Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 9 00:36:07.980813 dockerd[1628]: time="2025-09-09T00:36:07.980535788Z" level=info msg="Daemon has completed initialization" Sep 9 00:36:08.009751 dockerd[1628]: time="2025-09-09T00:36:08.009469506Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:36:08.009907 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:36:08.733867 containerd[1438]: time="2025-09-09T00:36:08.733815129Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 00:36:09.353357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3864980950.mount: Deactivated successfully. Sep 9 00:36:10.267928 containerd[1438]: time="2025-09-09T00:36:10.267864268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:10.268435 containerd[1438]: time="2025-09-09T00:36:10.268395196Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615" Sep 9 00:36:10.269926 containerd[1438]: time="2025-09-09T00:36:10.269876353Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:10.273562 containerd[1438]: time="2025-09-09T00:36:10.273208348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:10.274603 containerd[1438]: time="2025-09-09T00:36:10.274257730Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.540399866s" Sep 9 00:36:10.274603 containerd[1438]: time="2025-09-09T00:36:10.274295723Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 9 00:36:10.275647 containerd[1438]: time="2025-09-09T00:36:10.275623206Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 00:36:11.441628 containerd[1438]: time="2025-09-09T00:36:11.441572154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:11.442844 containerd[1438]: time="2025-09-09T00:36:11.442812646Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979" Sep 9 00:36:11.443847 containerd[1438]: time="2025-09-09T00:36:11.443795932Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:11.446715 containerd[1438]: time="2025-09-09T00:36:11.446675989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:11.447940 containerd[1438]: time="2025-09-09T00:36:11.447912275Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.172258744s" Sep 9 00:36:11.448023 containerd[1438]: time="2025-09-09T00:36:11.447943213Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 9 00:36:11.448871 containerd[1438]: time="2025-09-09T00:36:11.448421146Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 00:36:12.042384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:36:12.053729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:12.185156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:12.189101 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:36:12.280142 kubelet[1843]: E0909 00:36:12.280092 1843 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:36:12.284216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:36:12.284398 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 9 00:36:12.659891 containerd[1438]: time="2025-09-09T00:36:12.659836479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:12.661245 containerd[1438]: time="2025-09-09T00:36:12.661210410Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016" Sep 9 00:36:12.662190 containerd[1438]: time="2025-09-09T00:36:12.662142398Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:12.665657 containerd[1438]: time="2025-09-09T00:36:12.665624561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:12.669226 containerd[1438]: time="2025-09-09T00:36:12.669168110Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.220713373s" Sep 9 00:36:12.669226 containerd[1438]: time="2025-09-09T00:36:12.669225296Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 9 00:36:12.669908 containerd[1438]: time="2025-09-09T00:36:12.669876319Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 00:36:13.693124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155816214.mount: Deactivated successfully. 
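
Each PullImage / ImageCreate / Pulled sequence above is the CRI plugin resolving a tag, fetching and unpacking layers into the overlayfs snapshotter, and recording the repo digest. Roughly the same operation, sketched with the containerd Go client (image reference taken from the log; the CRI plugin drives this internally rather than through this exact call):

package main

import (
    "context"
    "fmt"
    "log"

    containerd "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Pull and unpack into the snapshotter configured in the CRI config dump above.
    img, err := client.Pull(ctx, "registry.k8s.io/kube-scheduler:v1.33.4", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
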
Sep 9 00:36:13.930177 containerd[1438]: time="2025-09-09T00:36:13.930121905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:13.930696 containerd[1438]: time="2025-09-09T00:36:13.930666546Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961" Sep 9 00:36:13.931571 containerd[1438]: time="2025-09-09T00:36:13.931531307Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:13.933358 containerd[1438]: time="2025-09-09T00:36:13.933334203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:13.934822 containerd[1438]: time="2025-09-09T00:36:13.934791001Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.264877095s" Sep 9 00:36:13.934822 containerd[1438]: time="2025-09-09T00:36:13.934822343Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 9 00:36:13.937916 containerd[1438]: time="2025-09-09T00:36:13.935608607Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 00:36:14.540097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3317359154.mount: Deactivated successfully. 
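
The pull entries report both bytes read and wall-clock time, so effective throughput falls out directly: kube-proxy's 28199961 bytes in 1.264877095s is about 21 MiB/s. A sketch of the arithmetic:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Bytes read and duration as reported for the kube-proxy pull in the log above.
    const bytesRead = 28199961.0
    d, err := time.ParseDuration("1.264877095s")
    if err != nil {
        panic(err)
    }
    mibPerSec := bytesRead / d.Seconds() / (1 << 20)
    fmt.Printf("effective pull throughput: %.1f MiB/s\n", mibPerSec) // ~21.3 MiB/s
}
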
Sep 9 00:36:15.397141 containerd[1438]: time="2025-09-09T00:36:15.397087265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:15.397773 containerd[1438]: time="2025-09-09T00:36:15.397742112Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 9 00:36:15.398633 containerd[1438]: time="2025-09-09T00:36:15.398605578Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:15.402315 containerd[1438]: time="2025-09-09T00:36:15.402282798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:15.404174 containerd[1438]: time="2025-09-09T00:36:15.404142262Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.468493941s" Sep 9 00:36:15.404226 containerd[1438]: time="2025-09-09T00:36:15.404187714Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 9 00:36:15.404884 containerd[1438]: time="2025-09-09T00:36:15.404847512Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:36:15.831271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304871586.mount: Deactivated successfully. 
Sep 9 00:36:15.835967 containerd[1438]: time="2025-09-09T00:36:15.835928285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:15.836414 containerd[1438]: time="2025-09-09T00:36:15.836382965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 9 00:36:15.837218 containerd[1438]: time="2025-09-09T00:36:15.837179079Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:15.839144 containerd[1438]: time="2025-09-09T00:36:15.839111451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:15.840211 containerd[1438]: time="2025-09-09T00:36:15.840084582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 435.200032ms" Sep 9 00:36:15.840211 containerd[1438]: time="2025-09-09T00:36:15.840122345Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 00:36:15.840711 containerd[1438]: time="2025-09-09T00:36:15.840661729Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 00:36:16.264620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389686683.mount: Deactivated successfully. Sep 9 00:36:17.744604 containerd[1438]: time="2025-09-09T00:36:17.744534109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:17.746015 containerd[1438]: time="2025-09-09T00:36:17.745982362Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297" Sep 9 00:36:17.746627 containerd[1438]: time="2025-09-09T00:36:17.746594136Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:17.750019 containerd[1438]: time="2025-09-09T00:36:17.749984353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:17.751493 containerd[1438]: time="2025-09-09T00:36:17.751362983Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.910665116s" Sep 9 00:36:17.751493 containerd[1438]: time="2025-09-09T00:36:17.751399644Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 9 00:36:21.865800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 00:36:21.874769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:21.894523 systemd[1]: Reloading requested from client PID 2007 ('systemctl') (unit session-7.scope)... Sep 9 00:36:21.894561 systemd[1]: Reloading... Sep 9 00:36:21.959573 zram_generator::config[2046]: No configuration found. Sep 9 00:36:22.071235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:36:22.125988 systemd[1]: Reloading finished in 231 ms. Sep 9 00:36:22.166516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:22.168738 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:22.170292 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:36:22.170474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:22.171800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:22.269252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:22.272721 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:36:22.304562 kubelet[2093]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:36:22.304562 kubelet[2093]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:36:22.304562 kubelet[2093]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
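
The three deprecation warnings above all point the same way: those flags now belong in the kubelet config file. Illustrative only (kubeadm normally generates /var/lib/kubelet/config.yaml with many more fields than this), a sketch of the v1beta1 KubeletConfiguration fields corresponding to --container-runtime-endpoint and --volume-plugin-dir, using the socket and flexvolume paths that appear elsewhere in this log:

package main

import (
    "log"
    "os"
)

// Illustrative mapping, not a drop-in config: the deprecated flags
// --container-runtime-endpoint and --volume-plugin-dir correspond to
// these kubelet.config.k8s.io/v1beta1 fields.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
`

func main() {
    if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
        log.Fatal(err)
    }
}
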
Sep 9 00:36:22.304562 kubelet[2093]: I0909 00:36:22.304297 2093 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:36:22.945668 kubelet[2093]: I0909 00:36:22.945625 2093 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:36:22.945668 kubelet[2093]: I0909 00:36:22.945656 2093 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:36:22.945915 kubelet[2093]: I0909 00:36:22.945891 2093 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:36:22.968103 kubelet[2093]: I0909 00:36:22.967961 2093 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:36:22.971422 kubelet[2093]: E0909 00:36:22.970947 2093 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:36:22.977417 kubelet[2093]: E0909 00:36:22.977381 2093 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:36:22.977510 kubelet[2093]: I0909 00:36:22.977498 2093 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:36:22.979880 kubelet[2093]: I0909 00:36:22.979863 2093 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:36:22.980882 kubelet[2093]: I0909 00:36:22.980852 2093 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:36:22.981091 kubelet[2093]: I0909 00:36:22.980954 2093 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:36:22.981271 kubelet[2093]: I0909 00:36:22.981260 2093 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:36:22.981325 kubelet[2093]: I0909 00:36:22.981317 2093 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:36:22.981536 kubelet[2093]: I0909 00:36:22.981523 2093 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:36:22.983987 kubelet[2093]: I0909 00:36:22.983968 2093 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:36:22.984065 kubelet[2093]: I0909 00:36:22.984055 2093 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:36:22.984153 kubelet[2093]: I0909 00:36:22.984143 2093 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:36:22.985807 kubelet[2093]: I0909 00:36:22.985791 2093 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:36:22.987292 kubelet[2093]: I0909 00:36:22.987274 2093 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:36:22.988146 kubelet[2093]: I0909 00:36:22.988130 2093 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:36:22.988273 kubelet[2093]: E0909 00:36:22.988190 2093 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:36:22.989044 
kubelet[2093]: E0909 00:36:22.988333 2093 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:36:22.989044 kubelet[2093]: W0909 00:36:22.988351 2093 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:36:22.990890 kubelet[2093]: I0909 00:36:22.990863 2093 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:36:22.990950 kubelet[2093]: I0909 00:36:22.990904 2093 server.go:1289] "Started kubelet" Sep 9 00:36:22.993567 kubelet[2093]: I0909 00:36:22.991025 2093 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:36:22.993567 kubelet[2093]: I0909 00:36:22.992018 2093 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:36:22.993567 kubelet[2093]: I0909 00:36:22.992695 2093 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:36:22.993567 kubelet[2093]: I0909 00:36:22.992928 2093 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:36:22.994498 kubelet[2093]: I0909 00:36:22.994480 2093 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:36:22.994618 kubelet[2093]: I0909 00:36:22.994501 2093 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:36:22.994618 kubelet[2093]: E0909 00:36:22.993202 2093 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186376257d3155d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:36:22.990878161 +0000 UTC m=+0.714710096,LastTimestamp:2025-09-09 00:36:22.990878161 +0000 UTC m=+0.714710096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:36:22.994618 kubelet[2093]: I0909 00:36:22.994563 2093 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:36:22.995083 kubelet[2093]: I0909 00:36:22.994940 2093 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:36:22.995083 kubelet[2093]: I0909 00:36:22.994984 2093 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:36:22.995083 kubelet[2093]: E0909 00:36:22.995012 2093 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:36:22.995588 kubelet[2093]: E0909 00:36:22.995557 2093 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Sep 9 00:36:22.995739 kubelet[2093]: I0909 00:36:22.995728 2093 
factory.go:223] Registration of the systemd container factory successfully Sep 9 00:36:22.995809 kubelet[2093]: I0909 00:36:22.995791 2093 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:36:22.997114 kubelet[2093]: E0909 00:36:22.997041 2093 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:36:22.997258 kubelet[2093]: I0909 00:36:22.997193 2093 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:36:23.004112 kubelet[2093]: E0909 00:36:23.002817 2093 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:36:23.008068 kubelet[2093]: I0909 00:36:23.008049 2093 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:36:23.008068 kubelet[2093]: I0909 00:36:23.008062 2093 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:36:23.008148 kubelet[2093]: I0909 00:36:23.008082 2093 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:36:23.010187 kubelet[2093]: I0909 00:36:23.010159 2093 policy_none.go:49] "None policy: Start" Sep 9 00:36:23.010187 kubelet[2093]: I0909 00:36:23.010182 2093 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:36:23.010187 kubelet[2093]: I0909 00:36:23.010192 2093 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:36:23.015746 kubelet[2093]: I0909 00:36:23.015655 2093 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:36:23.016657 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:36:23.016988 kubelet[2093]: I0909 00:36:23.016890 2093 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:36:23.016988 kubelet[2093]: I0909 00:36:23.016914 2093 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:36:23.016988 kubelet[2093]: I0909 00:36:23.016930 2093 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:36:23.016988 kubelet[2093]: I0909 00:36:23.016937 2093 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:36:23.016988 kubelet[2093]: E0909 00:36:23.016971 2093 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:36:23.018004 kubelet[2093]: E0909 00:36:23.017797 2093 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:36:23.026144 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:36:23.029809 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
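
Every "connection refused" against https://10.0.0.144:6443 above is expected ordering on a self-hosted control-plane node: the kubelet has to come up first so it can launch the kube-apiserver static pod it is trying to reach, so the certificate request, node watch, and lease calls all retry until that pod is serving. A sketch that polls the same endpoint:

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // The apiserver endpoint the kubelet is retrying in the log above.
    const addr = "10.0.0.144:6443"

    for i := 0; i < 30; i++ {
        conn, err := net.DialTimeout("tcp", addr, time.Second)
        if err == nil {
            conn.Close()
            fmt.Println(addr, "is accepting connections")
            return
        }
        fmt.Println("waiting:", err) // "connect: connection refused" until the static pod is up
        time.Sleep(2 * time.Second)
    }
    fmt.Println("gave up waiting for", addr)
}
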
Sep 9 00:36:23.039223 kubelet[2093]: E0909 00:36:23.039194 2093 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:36:23.039471 kubelet[2093]: I0909 00:36:23.039357 2093 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:36:23.039471 kubelet[2093]: I0909 00:36:23.039371 2093 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:36:23.039846 kubelet[2093]: I0909 00:36:23.039654 2093 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:36:23.040969 kubelet[2093]: E0909 00:36:23.040954 2093 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:36:23.041093 kubelet[2093]: E0909 00:36:23.041082 2093 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:36:23.126817 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 00:36:23.136280 kubelet[2093]: E0909 00:36:23.136239 2093 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:36:23.139005 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 9 00:36:23.140849 kubelet[2093]: I0909 00:36:23.140656 2093 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:36:23.141094 kubelet[2093]: E0909 00:36:23.141058 2093 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Sep 9 00:36:23.149560 kubelet[2093]: E0909 00:36:23.149495 2093 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:36:23.151625 systemd[1]: Created slice kubepods-burstable-podcedc20a502482210b01ec0d37ead1615.slice - libcontainer container kubepods-burstable-podcedc20a502482210b01ec0d37ead1615.slice. 
Sep 9 00:36:23.152811 kubelet[2093]: E0909 00:36:23.152791 2093 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:36:23.196341 kubelet[2093]: E0909 00:36:23.196255 2093 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Sep 9 00:36:23.297011 kubelet[2093]: I0909 00:36:23.296950 2093 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:23.297011 kubelet[2093]: I0909 00:36:23.296989 2093 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cedc20a502482210b01ec0d37ead1615-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cedc20a502482210b01ec0d37ead1615\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:23.297011 kubelet[2093]: I0909 00:36:23.297010 2093 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cedc20a502482210b01ec0d37ead1615-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cedc20a502482210b01ec0d37ead1615\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:23.297168 kubelet[2093]: I0909 00:36:23.297078 2093 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:23.297168 kubelet[2093]: I0909 00:36:23.297107 2093 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:23.297168 kubelet[2093]: I0909 00:36:23.297124 2093 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:23.297168 kubelet[2093]: I0909 00:36:23.297163 2093 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:23.297245 kubelet[2093]: I0909 00:36:23.297180 2093 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/cedc20a502482210b01ec0d37ead1615-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cedc20a502482210b01ec0d37ead1615\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:23.297245 kubelet[2093]: I0909 00:36:23.297195 2093 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:23.343241 kubelet[2093]: I0909 00:36:23.343211 2093 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:36:23.343728 kubelet[2093]: E0909 00:36:23.343689 2093 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Sep 9 00:36:23.437475 kubelet[2093]: E0909 00:36:23.437438 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:23.438096 containerd[1438]: time="2025-09-09T00:36:23.438060049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:23.450725 kubelet[2093]: E0909 00:36:23.450256 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:23.451646 containerd[1438]: time="2025-09-09T00:36:23.451608719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:23.453970 kubelet[2093]: E0909 00:36:23.453950 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:23.454232 containerd[1438]: time="2025-09-09T00:36:23.454211287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cedc20a502482210b01ec0d37ead1615,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:23.597706 kubelet[2093]: E0909 00:36:23.597667 2093 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Sep 9 00:36:23.745772 kubelet[2093]: I0909 00:36:23.745427 2093 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:36:23.745772 kubelet[2093]: E0909 00:36:23.745755 2093 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Sep 9 00:36:23.953390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000665165.mount: Deactivated successfully. 
Sep 9 00:36:23.959573 containerd[1438]: time="2025-09-09T00:36:23.959450288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:36:23.960699 containerd[1438]: time="2025-09-09T00:36:23.960674481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:36:23.961421 containerd[1438]: time="2025-09-09T00:36:23.961393274Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:36:23.962609 containerd[1438]: time="2025-09-09T00:36:23.962569361Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:36:23.963841 containerd[1438]: time="2025-09-09T00:36:23.963728971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:36:23.964045 containerd[1438]: time="2025-09-09T00:36:23.964023504Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:36:23.964593 containerd[1438]: time="2025-09-09T00:36:23.964387511Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 9 00:36:23.965678 containerd[1438]: time="2025-09-09T00:36:23.965620123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:36:23.969174 containerd[1438]: time="2025-09-09T00:36:23.969012482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 514.753049ms" Sep 9 00:36:23.970558 containerd[1438]: time="2025-09-09T00:36:23.970520504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 518.842272ms" Sep 9 00:36:23.972867 containerd[1438]: time="2025-09-09T00:36:23.972838202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 534.699098ms" Sep 9 00:36:24.074368 containerd[1438]: time="2025-09-09T00:36:24.073805209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:36:24.074368 containerd[1438]: time="2025-09-09T00:36:24.074230153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:36:24.074368 containerd[1438]: time="2025-09-09T00:36:24.074244822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:24.075166 containerd[1438]: time="2025-09-09T00:36:24.075118797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:24.078774 containerd[1438]: time="2025-09-09T00:36:24.078477272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:36:24.078774 containerd[1438]: time="2025-09-09T00:36:24.078516068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:36:24.078774 containerd[1438]: time="2025-09-09T00:36:24.078526608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:24.078774 containerd[1438]: time="2025-09-09T00:36:24.078600471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:24.078967 containerd[1438]: time="2025-09-09T00:36:24.078492622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:36:24.078967 containerd[1438]: time="2025-09-09T00:36:24.078568650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:36:24.078967 containerd[1438]: time="2025-09-09T00:36:24.078580112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:24.078967 containerd[1438]: time="2025-09-09T00:36:24.078644196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:24.102776 systemd[1]: Started cri-containerd-2027a7d9e62b8712b875ba9af999b4f2143fdf9614c1a17beaf593f478586b26.scope - libcontainer container 2027a7d9e62b8712b875ba9af999b4f2143fdf9614c1a17beaf593f478586b26. Sep 9 00:36:24.104729 systemd[1]: Started cri-containerd-adda034a3b74622bfaf23cd7cfe5cfd567359ffd8e550dc5f27a66e27b9e985b.scope - libcontainer container adda034a3b74622bfaf23cd7cfe5cfd567359ffd8e550dc5f27a66e27b9e985b. Sep 9 00:36:24.109347 systemd[1]: Started cri-containerd-71fb564287da1b51bc8988197808f04ee7e925da82e1ec6b319cfe8064e7491f.scope - libcontainer container 71fb564287da1b51bc8988197808f04ee7e925da82e1ec6b319cfe8064e7491f. 
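The RunPodSandbox requests above and the cri-containerd-*.scope units systemd starts here are two views of one CRI call: the kubelet asks containerd for a pause-container sandbox, and containerd's runc shim runs it in a dedicated scope. A hedged sketch of issuing the same call directly, assuming containerd's default socket path, with metadata copied from the kube-scheduler entry above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; containerd conventionally listens on
	// /run/containerd/containerd.sock.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-localhost", // from the log above
				Uid:       "d75e6f6978d9f275ea19380916c9cccd",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// The returned id is what the log prints as "returns sandbox id ...".
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```

The resulting sandboxes can also be listed on the host with crictl pods, which talks to the same socket.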
Sep 9 00:36:24.134204 kubelet[2093]: E0909 00:36:24.134156 2093 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:36:24.139469 containerd[1438]: time="2025-09-09T00:36:24.139425949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"adda034a3b74622bfaf23cd7cfe5cfd567359ffd8e550dc5f27a66e27b9e985b\"" Sep 9 00:36:24.139935 containerd[1438]: time="2025-09-09T00:36:24.139911691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2027a7d9e62b8712b875ba9af999b4f2143fdf9614c1a17beaf593f478586b26\"" Sep 9 00:36:24.141252 kubelet[2093]: E0909 00:36:24.141231 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:24.141345 kubelet[2093]: E0909 00:36:24.141267 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:24.145900 containerd[1438]: time="2025-09-09T00:36:24.145793021Z" level=info msg="CreateContainer within sandbox \"2027a7d9e62b8712b875ba9af999b4f2143fdf9614c1a17beaf593f478586b26\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:36:24.146172 containerd[1438]: time="2025-09-09T00:36:24.146122660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cedc20a502482210b01ec0d37ead1615,Namespace:kube-system,Attempt:0,} returns sandbox id \"71fb564287da1b51bc8988197808f04ee7e925da82e1ec6b319cfe8064e7491f\"" Sep 9 00:36:24.146783 kubelet[2093]: E0909 00:36:24.146758 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:24.146849 containerd[1438]: time="2025-09-09T00:36:24.146766149Z" level=info msg="CreateContainer within sandbox \"adda034a3b74622bfaf23cd7cfe5cfd567359ffd8e550dc5f27a66e27b9e985b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:36:24.150871 containerd[1438]: time="2025-09-09T00:36:24.150735769Z" level=info msg="CreateContainer within sandbox \"71fb564287da1b51bc8988197808f04ee7e925da82e1ec6b319cfe8064e7491f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:36:24.160291 containerd[1438]: time="2025-09-09T00:36:24.160250267Z" level=info msg="CreateContainer within sandbox \"2027a7d9e62b8712b875ba9af999b4f2143fdf9614c1a17beaf593f478586b26\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1348c32748d3688b1bab050db3788d3090dac7796800901e35951e33b6ecf90b\"" Sep 9 00:36:24.161015 containerd[1438]: time="2025-09-09T00:36:24.160987056Z" level=info msg="StartContainer for \"1348c32748d3688b1bab050db3788d3090dac7796800901e35951e33b6ecf90b\"" Sep 9 00:36:24.166893 containerd[1438]: time="2025-09-09T00:36:24.166775165Z" level=info msg="CreateContainer within sandbox 
\"adda034a3b74622bfaf23cd7cfe5cfd567359ffd8e550dc5f27a66e27b9e985b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9a0fbd93c7a782d01d4a13a9cc90bf90ae93200b3dda938146ee0d4df8fbad8a\"" Sep 9 00:36:24.167304 containerd[1438]: time="2025-09-09T00:36:24.167277780Z" level=info msg="StartContainer for \"9a0fbd93c7a782d01d4a13a9cc90bf90ae93200b3dda938146ee0d4df8fbad8a\"" Sep 9 00:36:24.170844 containerd[1438]: time="2025-09-09T00:36:24.170803860Z" level=info msg="CreateContainer within sandbox \"71fb564287da1b51bc8988197808f04ee7e925da82e1ec6b319cfe8064e7491f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dca41806d5d47623c20b6502ae191c0762cf9f3458b3619dccb921c9b2885c8d\"" Sep 9 00:36:24.172819 containerd[1438]: time="2025-09-09T00:36:24.172667475Z" level=info msg="StartContainer for \"dca41806d5d47623c20b6502ae191c0762cf9f3458b3619dccb921c9b2885c8d\"" Sep 9 00:36:24.187725 systemd[1]: Started cri-containerd-1348c32748d3688b1bab050db3788d3090dac7796800901e35951e33b6ecf90b.scope - libcontainer container 1348c32748d3688b1bab050db3788d3090dac7796800901e35951e33b6ecf90b. Sep 9 00:36:24.203702 systemd[1]: Started cri-containerd-9a0fbd93c7a782d01d4a13a9cc90bf90ae93200b3dda938146ee0d4df8fbad8a.scope - libcontainer container 9a0fbd93c7a782d01d4a13a9cc90bf90ae93200b3dda938146ee0d4df8fbad8a. Sep 9 00:36:24.206867 systemd[1]: Started cri-containerd-dca41806d5d47623c20b6502ae191c0762cf9f3458b3619dccb921c9b2885c8d.scope - libcontainer container dca41806d5d47623c20b6502ae191c0762cf9f3458b3619dccb921c9b2885c8d. Sep 9 00:36:24.241473 containerd[1438]: time="2025-09-09T00:36:24.241428387Z" level=info msg="StartContainer for \"9a0fbd93c7a782d01d4a13a9cc90bf90ae93200b3dda938146ee0d4df8fbad8a\" returns successfully" Sep 9 00:36:24.241712 containerd[1438]: time="2025-09-09T00:36:24.241528502Z" level=info msg="StartContainer for \"1348c32748d3688b1bab050db3788d3090dac7796800901e35951e33b6ecf90b\" returns successfully" Sep 9 00:36:24.248121 containerd[1438]: time="2025-09-09T00:36:24.247926273Z" level=info msg="StartContainer for \"dca41806d5d47623c20b6502ae191c0762cf9f3458b3619dccb921c9b2885c8d\" returns successfully" Sep 9 00:36:24.547247 kubelet[2093]: I0909 00:36:24.547139 2093 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:36:25.036641 kubelet[2093]: E0909 00:36:25.036530 2093 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:36:25.036834 kubelet[2093]: E0909 00:36:25.036807 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:25.037937 kubelet[2093]: E0909 00:36:25.037914 2093 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:36:25.038168 kubelet[2093]: E0909 00:36:25.038009 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:25.039332 kubelet[2093]: E0909 00:36:25.039303 2093 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:36:25.039447 kubelet[2093]: E0909 00:36:25.039402 2093 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:25.583580 kubelet[2093]: E0909 00:36:25.582747 2093 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:36:25.669051 kubelet[2093]: I0909 00:36:25.669013 2093 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:36:25.696090 kubelet[2093]: I0909 00:36:25.696059 2093 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:25.701395 kubelet[2093]: E0909 00:36:25.701358 2093 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:25.701395 kubelet[2093]: I0909 00:36:25.701391 2093 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:25.703243 kubelet[2093]: E0909 00:36:25.703217 2093 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:25.703243 kubelet[2093]: I0909 00:36:25.703243 2093 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:25.704761 kubelet[2093]: E0909 00:36:25.704732 2093 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:25.988489 kubelet[2093]: I0909 00:36:25.988372 2093 apiserver.go:52] "Watching apiserver" Sep 9 00:36:25.995286 kubelet[2093]: I0909 00:36:25.995259 2093 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:36:26.039869 kubelet[2093]: I0909 00:36:26.039840 2093 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:26.039979 kubelet[2093]: I0909 00:36:26.039956 2093 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:26.040619 kubelet[2093]: I0909 00:36:26.040604 2093 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:26.042003 kubelet[2093]: E0909 00:36:26.041980 2093 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:26.042150 kubelet[2093]: E0909 00:36:26.042131 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:26.043161 kubelet[2093]: E0909 00:36:26.043138 2093 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:26.043252 kubelet[2093]: E0909 00:36:26.043227 2093 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:26.043280 kubelet[2093]: E0909 00:36:26.043266 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:26.043376 kubelet[2093]: E0909 00:36:26.043360 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:27.718123 systemd[1]: Reloading requested from client PID 2384 ('systemctl') (unit session-7.scope)... Sep 9 00:36:27.718138 systemd[1]: Reloading... Sep 9 00:36:27.780576 zram_generator::config[2426]: No configuration found. Sep 9 00:36:27.942692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:36:28.010646 systemd[1]: Reloading finished in 292 ms. Sep 9 00:36:28.041173 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:28.058811 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:36:28.059062 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:28.059121 systemd[1]: kubelet.service: Consumed 1.003s CPU time, 128.1M memory peak, 0B memory swap peak. Sep 9 00:36:28.069776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:28.169327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:28.173237 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:36:28.214047 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:36:28.214047 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:36:28.214047 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 00:36:28.214386 kubelet[2465]: I0909 00:36:28.214123 2465 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:36:28.219406 kubelet[2465]: I0909 00:36:28.219368 2465 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:36:28.220563 kubelet[2465]: I0909 00:36:28.219502 2465 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:36:28.220563 kubelet[2465]: I0909 00:36:28.219750 2465 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:36:28.222777 kubelet[2465]: I0909 00:36:28.222752 2465 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 00:36:28.225150 kubelet[2465]: I0909 00:36:28.225114 2465 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:36:28.228292 kubelet[2465]: E0909 00:36:28.228264 2465 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:36:28.228377 kubelet[2465]: I0909 00:36:28.228365 2465 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:36:28.231004 kubelet[2465]: I0909 00:36:28.230977 2465 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:36:28.231306 kubelet[2465]: I0909 00:36:28.231269 2465 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:36:28.231575 kubelet[2465]: I0909 00:36:28.231356 2465 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:36:28.231739 kubelet[2465]: I0909 00:36:28.231720 2465 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 9 00:36:28.231797 kubelet[2465]: I0909 00:36:28.231788 2465 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:36:28.231908 kubelet[2465]: I0909 00:36:28.231897 2465 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:36:28.232145 kubelet[2465]: I0909 00:36:28.232128 2465 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:36:28.232213 kubelet[2465]: I0909 00:36:28.232203 2465 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:36:28.232279 kubelet[2465]: I0909 00:36:28.232271 2465 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:36:28.232345 kubelet[2465]: I0909 00:36:28.232335 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:36:28.233486 kubelet[2465]: I0909 00:36:28.233457 2465 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:36:28.235398 kubelet[2465]: I0909 00:36:28.235370 2465 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:36:28.242956 kubelet[2465]: I0909 00:36:28.242017 2465 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:36:28.242956 kubelet[2465]: I0909 00:36:28.242060 2465 server.go:1289] "Started kubelet" Sep 9 00:36:28.242956 kubelet[2465]: I0909 00:36:28.242480 2465 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:36:28.242956 kubelet[2465]: I0909 00:36:28.242804 2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:36:28.243682 kubelet[2465]: I0909 00:36:28.243662 2465 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:36:28.245232 kubelet[2465]: I0909 00:36:28.245205 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:36:28.245322 kubelet[2465]: I0909 00:36:28.245306 2465 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:36:28.246088 kubelet[2465]: I0909 00:36:28.245716 2465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:36:28.246088 kubelet[2465]: I0909 00:36:28.245899 2465 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:36:28.246088 kubelet[2465]: E0909 00:36:28.246012 2465 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:36:28.246328 kubelet[2465]: I0909 00:36:28.246288 2465 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:36:28.246429 kubelet[2465]: I0909 00:36:28.246413 2465 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:36:28.251168 kubelet[2465]: I0909 00:36:28.251140 2465 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:36:28.251278 kubelet[2465]: I0909 00:36:28.251248 2465 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:36:28.260721 kubelet[2465]: I0909 00:36:28.257858 2465 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:36:28.263468 kubelet[2465]: I0909 00:36:28.263364 2465 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 9 00:36:28.265997 kubelet[2465]: I0909 00:36:28.265935 2465 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:36:28.265997 kubelet[2465]: I0909 00:36:28.265961 2465 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:36:28.266472 kubelet[2465]: I0909 00:36:28.266422 2465 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:36:28.266472 kubelet[2465]: I0909 00:36:28.266447 2465 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:36:28.266592 kubelet[2465]: E0909 00:36:28.266497 2465 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:36:28.296967 kubelet[2465]: I0909 00:36:28.296942 2465 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:36:28.297934 kubelet[2465]: I0909 00:36:28.297114 2465 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:36:28.297934 kubelet[2465]: I0909 00:36:28.297140 2465 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:36:28.297934 kubelet[2465]: I0909 00:36:28.297264 2465 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:36:28.297934 kubelet[2465]: I0909 00:36:28.297274 2465 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:36:28.297934 kubelet[2465]: I0909 00:36:28.297290 2465 policy_none.go:49] "None policy: Start" Sep 9 00:36:28.297934 kubelet[2465]: I0909 00:36:28.297299 2465 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:36:28.297934 kubelet[2465]: I0909 00:36:28.297308 2465 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:36:28.297934 kubelet[2465]: I0909 00:36:28.297384 2465 state_mem.go:75] "Updated machine memory state" Sep 9 00:36:28.300858 kubelet[2465]: E0909 00:36:28.300825 2465 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:36:28.301122 kubelet[2465]: I0909 00:36:28.301099 2465 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:36:28.301159 kubelet[2465]: I0909 00:36:28.301123 2465 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:36:28.301627 kubelet[2465]: I0909 00:36:28.301320 2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:36:28.302260 kubelet[2465]: E0909 00:36:28.302232 2465 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:36:28.367732 kubelet[2465]: I0909 00:36:28.367699 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:28.367876 kubelet[2465]: I0909 00:36:28.367782 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:28.367876 kubelet[2465]: I0909 00:36:28.367817 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:28.405233 kubelet[2465]: I0909 00:36:28.405207 2465 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:36:28.411646 kubelet[2465]: I0909 00:36:28.411617 2465 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:36:28.411768 kubelet[2465]: I0909 00:36:28.411694 2465 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:36:28.448554 kubelet[2465]: I0909 00:36:28.448503 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:28.448554 kubelet[2465]: I0909 00:36:28.448557 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:28.448681 kubelet[2465]: I0909 00:36:28.448576 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:28.448681 kubelet[2465]: I0909 00:36:28.448591 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cedc20a502482210b01ec0d37ead1615-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cedc20a502482210b01ec0d37ead1615\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:28.448681 kubelet[2465]: I0909 00:36:28.448606 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cedc20a502482210b01ec0d37ead1615-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cedc20a502482210b01ec0d37ead1615\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:28.448681 kubelet[2465]: I0909 00:36:28.448621 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:28.448681 kubelet[2465]: I0909 00:36:28.448649 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:28.448819 kubelet[2465]: I0909 00:36:28.448688 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:28.448819 kubelet[2465]: I0909 00:36:28.448726 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cedc20a502482210b01ec0d37ead1615-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cedc20a502482210b01ec0d37ead1615\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:28.675038 kubelet[2465]: E0909 00:36:28.674920 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:28.675038 kubelet[2465]: E0909 00:36:28.674916 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:28.675317 kubelet[2465]: E0909 00:36:28.675104 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:29.233114 kubelet[2465]: I0909 00:36:29.233076 2465 apiserver.go:52] "Watching apiserver" Sep 9 00:36:29.246557 kubelet[2465]: I0909 00:36:29.246513 2465 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:36:29.284818 kubelet[2465]: I0909 00:36:29.284395 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:29.285647 kubelet[2465]: I0909 00:36:29.285194 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:29.285647 kubelet[2465]: E0909 00:36:29.285384 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:29.310293 kubelet[2465]: E0909 00:36:29.310233 2465 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:29.310422 kubelet[2465]: E0909 00:36:29.310410 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:29.346056 kubelet[2465]: I0909 00:36:29.345698 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.345684136 podStartE2EDuration="1.345684136s" podCreationTimestamp="2025-09-09 00:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:29.345621154 +0000 UTC m=+1.168820638" watchObservedRunningTime="2025-09-09 00:36:29.345684136 +0000 UTC 
m=+1.168883580" Sep 9 00:36:29.346917 kubelet[2465]: E0909 00:36:29.346515 2465 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:29.346917 kubelet[2465]: E0909 00:36:29.346692 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:29.360799 kubelet[2465]: I0909 00:36:29.358677 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.35866054 podStartE2EDuration="1.35866054s" podCreationTimestamp="2025-09-09 00:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:29.352216923 +0000 UTC m=+1.175416407" watchObservedRunningTime="2025-09-09 00:36:29.35866054 +0000 UTC m=+1.181859984" Sep 9 00:36:29.368135 kubelet[2465]: I0909 00:36:29.368088 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.368072874 podStartE2EDuration="1.368072874s" podCreationTimestamp="2025-09-09 00:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:29.361296165 +0000 UTC m=+1.184495649" watchObservedRunningTime="2025-09-09 00:36:29.368072874 +0000 UTC m=+1.191272358" Sep 9 00:36:30.286368 kubelet[2465]: E0909 00:36:30.286318 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:30.286723 kubelet[2465]: E0909 00:36:30.286586 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:31.327004 kubelet[2465]: E0909 00:36:31.326972 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:34.017075 kubelet[2465]: E0909 00:36:34.017035 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:34.293627 kubelet[2465]: E0909 00:36:34.293394 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:35.293790 kubelet[2465]: I0909 00:36:35.293624 2465 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:36:35.294114 containerd[1438]: time="2025-09-09T00:36:35.294042792Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:36:35.294368 kubelet[2465]: I0909 00:36:35.294245 2465 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:36:36.272166 systemd[1]: Created slice kubepods-besteffort-podfca2a0af_5236_49b6_aa25_63a3744b11e0.slice - libcontainer container kubepods-besteffort-podfca2a0af_5236_49b6_aa25_63a3744b11e0.slice. 
Sep 9 00:36:36.299871 kubelet[2465]: I0909 00:36:36.299798 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fca2a0af-5236-49b6-aa25-63a3744b11e0-kube-proxy\") pod \"kube-proxy-xjqzr\" (UID: \"fca2a0af-5236-49b6-aa25-63a3744b11e0\") " pod="kube-system/kube-proxy-xjqzr" Sep 9 00:36:36.299871 kubelet[2465]: I0909 00:36:36.299839 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fca2a0af-5236-49b6-aa25-63a3744b11e0-xtables-lock\") pod \"kube-proxy-xjqzr\" (UID: \"fca2a0af-5236-49b6-aa25-63a3744b11e0\") " pod="kube-system/kube-proxy-xjqzr" Sep 9 00:36:36.300502 kubelet[2465]: I0909 00:36:36.299926 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fca2a0af-5236-49b6-aa25-63a3744b11e0-lib-modules\") pod \"kube-proxy-xjqzr\" (UID: \"fca2a0af-5236-49b6-aa25-63a3744b11e0\") " pod="kube-system/kube-proxy-xjqzr" Sep 9 00:36:36.300502 kubelet[2465]: I0909 00:36:36.299991 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtbwh\" (UniqueName: \"kubernetes.io/projected/fca2a0af-5236-49b6-aa25-63a3744b11e0-kube-api-access-rtbwh\") pod \"kube-proxy-xjqzr\" (UID: \"fca2a0af-5236-49b6-aa25-63a3744b11e0\") " pod="kube-system/kube-proxy-xjqzr" Sep 9 00:36:36.499114 systemd[1]: Created slice kubepods-besteffort-podaaffe4cd_f84a_48de_8e1e_80ff4434a53c.slice - libcontainer container kubepods-besteffort-podaaffe4cd_f84a_48de_8e1e_80ff4434a53c.slice. Sep 9 00:36:36.581720 kubelet[2465]: E0909 00:36:36.581560 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:36.582419 containerd[1438]: time="2025-09-09T00:36:36.582267805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xjqzr,Uid:fca2a0af-5236-49b6-aa25-63a3744b11e0,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:36.599259 containerd[1438]: time="2025-09-09T00:36:36.598821094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:36:36.599259 containerd[1438]: time="2025-09-09T00:36:36.599237075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:36:36.599259 containerd[1438]: time="2025-09-09T00:36:36.599250286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:36.599465 containerd[1438]: time="2025-09-09T00:36:36.599339199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:36.602453 kubelet[2465]: I0909 00:36:36.602318 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aaffe4cd-f84a-48de-8e1e-80ff4434a53c-var-lib-calico\") pod \"tigera-operator-755d956888-rmr9p\" (UID: \"aaffe4cd-f84a-48de-8e1e-80ff4434a53c\") " pod="tigera-operator/tigera-operator-755d956888-rmr9p" Sep 9 00:36:36.602453 kubelet[2465]: I0909 00:36:36.602404 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrv55\" (UniqueName: \"kubernetes.io/projected/aaffe4cd-f84a-48de-8e1e-80ff4434a53c-kube-api-access-nrv55\") pod \"tigera-operator-755d956888-rmr9p\" (UID: \"aaffe4cd-f84a-48de-8e1e-80ff4434a53c\") " pod="tigera-operator/tigera-operator-755d956888-rmr9p" Sep 9 00:36:36.621711 systemd[1]: Started cri-containerd-a84910ea40f5973d6e125fe1627545ef13855e295c33862386c8ec03fcdb54f8.scope - libcontainer container a84910ea40f5973d6e125fe1627545ef13855e295c33862386c8ec03fcdb54f8. Sep 9 00:36:36.643739 containerd[1438]: time="2025-09-09T00:36:36.643706568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xjqzr,Uid:fca2a0af-5236-49b6-aa25-63a3744b11e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a84910ea40f5973d6e125fe1627545ef13855e295c33862386c8ec03fcdb54f8\"" Sep 9 00:36:36.644894 kubelet[2465]: E0909 00:36:36.644526 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:36.649850 containerd[1438]: time="2025-09-09T00:36:36.649817818Z" level=info msg="CreateContainer within sandbox \"a84910ea40f5973d6e125fe1627545ef13855e295c33862386c8ec03fcdb54f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:36:36.667012 containerd[1438]: time="2025-09-09T00:36:36.666949621Z" level=info msg="CreateContainer within sandbox \"a84910ea40f5973d6e125fe1627545ef13855e295c33862386c8ec03fcdb54f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8ea8f74ae89598ebe8546932eecfe8b6ee70cb0212b18975612effc8f77e31db\"" Sep 9 00:36:36.667703 containerd[1438]: time="2025-09-09T00:36:36.667503195Z" level=info msg="StartContainer for \"8ea8f74ae89598ebe8546932eecfe8b6ee70cb0212b18975612effc8f77e31db\"" Sep 9 00:36:36.689689 systemd[1]: Started cri-containerd-8ea8f74ae89598ebe8546932eecfe8b6ee70cb0212b18975612effc8f77e31db.scope - libcontainer container 8ea8f74ae89598ebe8546932eecfe8b6ee70cb0212b18975612effc8f77e31db. Sep 9 00:36:36.711893 containerd[1438]: time="2025-09-09T00:36:36.711852790Z" level=info msg="StartContainer for \"8ea8f74ae89598ebe8546932eecfe8b6ee70cb0212b18975612effc8f77e31db\" returns successfully" Sep 9 00:36:36.802404 containerd[1438]: time="2025-09-09T00:36:36.802357019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-rmr9p,Uid:aaffe4cd-f84a-48de-8e1e-80ff4434a53c,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:36:36.825617 containerd[1438]: time="2025-09-09T00:36:36.824823716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:36:36.825617 containerd[1438]: time="2025-09-09T00:36:36.824896496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:36:36.825617 containerd[1438]: time="2025-09-09T00:36:36.824912188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:36.827105 containerd[1438]: time="2025-09-09T00:36:36.827043135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:36.829298 kubelet[2465]: E0909 00:36:36.829173 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:36.853716 systemd[1]: Started cri-containerd-95f555faee66e6225365f3cf412624c84c3cdd7ac1ba93ccd97d78354704ab95.scope - libcontainer container 95f555faee66e6225365f3cf412624c84c3cdd7ac1ba93ccd97d78354704ab95. Sep 9 00:36:36.895700 containerd[1438]: time="2025-09-09T00:36:36.895637404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-rmr9p,Uid:aaffe4cd-f84a-48de-8e1e-80ff4434a53c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"95f555faee66e6225365f3cf412624c84c3cdd7ac1ba93ccd97d78354704ab95\"" Sep 9 00:36:36.897396 containerd[1438]: time="2025-09-09T00:36:36.897141237Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:36:37.300191 kubelet[2465]: E0909 00:36:37.299303 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:37.300191 kubelet[2465]: E0909 00:36:37.299377 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:37.309784 kubelet[2465]: I0909 00:36:37.309530 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xjqzr" podStartSLOduration=1.309490805 podStartE2EDuration="1.309490805s" podCreationTimestamp="2025-09-09 00:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:37.309222116 +0000 UTC m=+9.132421560" watchObservedRunningTime="2025-09-09 00:36:37.309490805 +0000 UTC m=+9.132690289" Sep 9 00:36:38.460171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3999527429.mount: Deactivated successfully. 
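The PullImage "quay.io/tigera/operator:v1.38.6" request above goes to the CRI image service; the later "Pulled image ... in 3.72s" entry closes the same operation. A sketch of the call against the assumed containerd socket:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.6"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// On success the runtime returns the image reference (a digest),
	// matching the "returns image reference" entries in this log.
	fmt.Println("image ref:", resp.ImageRef)
}
```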
Sep 9 00:36:40.615558 containerd[1438]: time="2025-09-09T00:36:40.613217452Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:40.617030 containerd[1438]: time="2025-09-09T00:36:40.616994102Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 9 00:36:40.617786 containerd[1438]: time="2025-09-09T00:36:40.617742515Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:40.620119 containerd[1438]: time="2025-09-09T00:36:40.619824047Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:40.620764 containerd[1438]: time="2025-09-09T00:36:40.620658357Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 3.723394982s" Sep 9 00:36:40.620764 containerd[1438]: time="2025-09-09T00:36:40.620688858Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 9 00:36:40.626437 containerd[1438]: time="2025-09-09T00:36:40.626404226Z" level=info msg="CreateContainer within sandbox \"95f555faee66e6225365f3cf412624c84c3cdd7ac1ba93ccd97d78354704ab95\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:36:40.638194 containerd[1438]: time="2025-09-09T00:36:40.638082645Z" level=info msg="CreateContainer within sandbox \"95f555faee66e6225365f3cf412624c84c3cdd7ac1ba93ccd97d78354704ab95\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"633e9c46f0bfbf63eeebaa6416bb25cbe3f241673a464a0dcfb9e74579f8bac3\"" Sep 9 00:36:40.639832 containerd[1438]: time="2025-09-09T00:36:40.639803780Z" level=info msg="StartContainer for \"633e9c46f0bfbf63eeebaa6416bb25cbe3f241673a464a0dcfb9e74579f8bac3\"" Sep 9 00:36:40.662851 systemd[1]: Started cri-containerd-633e9c46f0bfbf63eeebaa6416bb25cbe3f241673a464a0dcfb9e74579f8bac3.scope - libcontainer container 633e9c46f0bfbf63eeebaa6416bb25cbe3f241673a464a0dcfb9e74579f8bac3. 
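"CreateContainer within sandbox ... returns container id" followed by "StartContainer ... returns successfully" is the second half of the CRI pod lifecycle: a container is defined inside an existing sandbox, then started by id. A sketch using the tigera-operator sandbox id and image from the log; the socket path is again an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox id as returned earlier in the log for tigera-operator.
	sandboxID := "95f555faee66e6225365f3cf412624c84c3cdd7ac1ba93ccd97d78354704ab95"
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "tigera-operator-755d956888-rmr9p",
			Uid:       "aaffe4cd-f84a-48de-8e1e-80ff4434a53c",
			Namespace: "tigera-operator",
		},
	}
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "tigera-operator", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.6"},
		},
		// Runtimes generally expect the original sandbox config alongside.
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	// "StartContainer for ... returns successfully" corresponds to this call.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started", created.ContainerId)
}
```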
Sep 9 00:36:40.685713 containerd[1438]: time="2025-09-09T00:36:40.684991773Z" level=info msg="StartContainer for \"633e9c46f0bfbf63eeebaa6416bb25cbe3f241673a464a0dcfb9e74579f8bac3\" returns successfully" Sep 9 00:36:41.322656 kubelet[2465]: I0909 00:36:41.322595 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-rmr9p" podStartSLOduration=1.596440312 podStartE2EDuration="5.322576507s" podCreationTimestamp="2025-09-09 00:36:36 +0000 UTC" firstStartedPulling="2025-09-09 00:36:36.896723054 +0000 UTC m=+8.719922538" lastFinishedPulling="2025-09-09 00:36:40.622859248 +0000 UTC m=+12.446058733" observedRunningTime="2025-09-09 00:36:41.322362853 +0000 UTC m=+13.145562337" watchObservedRunningTime="2025-09-09 00:36:41.322576507 +0000 UTC m=+13.145775991" Sep 9 00:36:41.336330 kubelet[2465]: E0909 00:36:41.336303 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:45.146216 update_engine[1415]: I20250909 00:36:45.145598 1415 update_attempter.cc:509] Updating boot flags... Sep 9 00:36:45.184579 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2858) Sep 9 00:36:45.225600 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2857) Sep 9 00:36:46.053144 sudo[1609]: pam_unix(sudo:session): session closed for user root Sep 9 00:36:46.057982 sshd[1606]: pam_unix(sshd:session): session closed for user core Sep 9 00:36:46.061633 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:55042.service: Deactivated successfully. Sep 9 00:36:46.063268 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:36:46.063436 systemd[1]: session-7.scope: Consumed 5.935s CPU time, 155.1M memory peak, 0B memory swap peak. Sep 9 00:36:46.064316 systemd-logind[1411]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:36:46.065271 systemd-logind[1411]: Removed session 7. Sep 9 00:36:49.703221 systemd[1]: Created slice kubepods-besteffort-pod3ad0f1f6_8f19_463e_b960_ad392ee04778.slice - libcontainer container kubepods-besteffort-pod3ad0f1f6_8f19_463e_b960_ad392ee04778.slice. 
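The tigera-operator startup entry above also shows how pod_startup_latency_tracker separates image-pull time from the SLO figure: podStartSLOduration is podStartE2EDuration minus the pull window (lastFinishedPulling minus firstStartedPulling). A quick check in Go against the logged values; the relation is inferred from the numbers, not quoted from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps from the tigera-operator entry above.
	firstPull := parse("2025-09-09 00:36:36.896723054 +0000 UTC")
	lastPull := parse("2025-09-09 00:36:40.622859248 +0000 UTC")
	e2e := 5322576507 * time.Nanosecond // podStartE2EDuration=5.322576507s

	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo) // ~1.596440313s, matching podStartSLOduration=1.596440312s
}
```

For kube-proxy earlier, no image was pulled (both pull timestamps are the zero value 0001-01-01), so its SLO and E2E durations coincide at 1.309s.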
Sep 9 00:36:49.793442 kubelet[2465]: I0909 00:36:49.793370 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ad0f1f6-8f19-463e-b960-ad392ee04778-tigera-ca-bundle\") pod \"calico-typha-76d8f7b9b4-v9dhq\" (UID: \"3ad0f1f6-8f19-463e-b960-ad392ee04778\") " pod="calico-system/calico-typha-76d8f7b9b4-v9dhq" Sep 9 00:36:49.793442 kubelet[2465]: I0909 00:36:49.793425 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ad0f1f6-8f19-463e-b960-ad392ee04778-typha-certs\") pod \"calico-typha-76d8f7b9b4-v9dhq\" (UID: \"3ad0f1f6-8f19-463e-b960-ad392ee04778\") " pod="calico-system/calico-typha-76d8f7b9b4-v9dhq" Sep 9 00:36:49.793442 kubelet[2465]: I0909 00:36:49.793446 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhffl\" (UniqueName: \"kubernetes.io/projected/3ad0f1f6-8f19-463e-b960-ad392ee04778-kube-api-access-nhffl\") pod \"calico-typha-76d8f7b9b4-v9dhq\" (UID: \"3ad0f1f6-8f19-463e-b960-ad392ee04778\") " pod="calico-system/calico-typha-76d8f7b9b4-v9dhq" Sep 9 00:36:50.020232 kubelet[2465]: E0909 00:36:50.020191 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:50.021998 containerd[1438]: time="2025-09-09T00:36:50.021563331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d8f7b9b4-v9dhq,Uid:3ad0f1f6-8f19-463e-b960-ad392ee04778,Namespace:calico-system,Attempt:0,}" Sep 9 00:36:50.054387 containerd[1438]: time="2025-09-09T00:36:50.053924282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:36:50.054387 containerd[1438]: time="2025-09-09T00:36:50.053995310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:36:50.054387 containerd[1438]: time="2025-09-09T00:36:50.054010436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:50.054387 containerd[1438]: time="2025-09-09T00:36:50.054371701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:50.070742 systemd[1]: Created slice kubepods-besteffort-podcd424043_83cc_4392_b67a_11b8c52917db.slice - libcontainer container kubepods-besteffort-podcd424043_83cc_4392_b67a_11b8c52917db.slice. Sep 9 00:36:50.088725 systemd[1]: Started cri-containerd-8d99e07b9dee89efc27d626269b6c3f1e9d60fdd3a601079aae60b89ce75578c.scope - libcontainer container 8d99e07b9dee89efc27d626269b6c3f1e9d60fdd3a601079aae60b89ce75578c. 
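The driver-call.go failures a few entries below come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, a FlexVolume driver that Calico's node agent has not installed yet: the probe runs the executable with the init verb and expects a JSON status on stdout, so an empty reply yields exactly the "unexpected end of JSON input" errors recorded here. A minimal stub honouring that contract, purely illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus is the JSON shape the kubelet's FlexVolume probe expects
// on stdout; an empty reply produces "unexpected end of JSON input".
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Other FlexVolume verbs (mount, unmount, ...) are out of scope here.
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}
```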
Sep 9 00:36:50.096953 kubelet[2465]: I0909 00:36:50.096907 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cd424043-83cc-4392-b67a-11b8c52917db-cni-log-dir\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.096953 kubelet[2465]: I0909 00:36:50.096955 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cd424043-83cc-4392-b67a-11b8c52917db-flexvol-driver-host\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.097105 kubelet[2465]: I0909 00:36:50.097015 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd424043-83cc-4392-b67a-11b8c52917db-lib-modules\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.097105 kubelet[2465]: I0909 00:36:50.097058 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cd424043-83cc-4392-b67a-11b8c52917db-policysync\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.097105 kubelet[2465]: I0909 00:36:50.097078 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd424043-83cc-4392-b67a-11b8c52917db-xtables-lock\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.097105 kubelet[2465]: I0909 00:36:50.097097 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cd424043-83cc-4392-b67a-11b8c52917db-cni-net-dir\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.097208 kubelet[2465]: I0909 00:36:50.097122 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cd424043-83cc-4392-b67a-11b8c52917db-cni-bin-dir\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.097208 kubelet[2465]: I0909 00:36:50.097145 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd424043-83cc-4392-b67a-11b8c52917db-tigera-ca-bundle\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.097403 kubelet[2465]: I0909 00:36:50.097381 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cd424043-83cc-4392-b67a-11b8c52917db-var-lib-calico\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.097442 kubelet[2465]: I0909 00:36:50.097414 2465 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cd424043-83cc-4392-b67a-11b8c52917db-node-certs\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.097442 kubelet[2465]: I0909 00:36:50.097430 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cd424043-83cc-4392-b67a-11b8c52917db-var-run-calico\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.097494 kubelet[2465]: I0909 00:36:50.097446 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frhxp\" (UniqueName: \"kubernetes.io/projected/cd424043-83cc-4392-b67a-11b8c52917db-kube-api-access-frhxp\") pod \"calico-node-jbvp9\" (UID: \"cd424043-83cc-4392-b67a-11b8c52917db\") " pod="calico-system/calico-node-jbvp9" Sep 9 00:36:50.124802 containerd[1438]: time="2025-09-09T00:36:50.124666472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d8f7b9b4-v9dhq,Uid:3ad0f1f6-8f19-463e-b960-ad392ee04778,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d99e07b9dee89efc27d626269b6c3f1e9d60fdd3a601079aae60b89ce75578c\"" Sep 9 00:36:50.126997 kubelet[2465]: E0909 00:36:50.126973 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:50.128590 containerd[1438]: time="2025-09-09T00:36:50.128362031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:36:50.199956 kubelet[2465]: E0909 00:36:50.199894 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:50.199956 kubelet[2465]: W0909 00:36:50.199937 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:50.203864 kubelet[2465]: E0909 00:36:50.203823 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:36:50.204115 kubelet[2465]: E0909 00:36:50.204099 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:50.204115 kubelet[2465]: W0909 00:36:50.204113 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:50.204182 kubelet[2465]: E0909 00:36:50.204125 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 9 00:36:50.355602 kubelet[2465]: E0909 00:36:50.354570 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bczn" podUID="5b22ce9a-63bb-41fc-bc0b-217e50145ea4"
Sep 9 00:36:50.383332 kubelet[2465]: E0909 00:36:50.383305 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:36:50.383332 kubelet[2465]: W0909 00:36:50.383326 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:36:50.383475 kubelet[2465]: E0909 00:36:50.383346 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:36:50.393468 containerd[1438]: time="2025-09-09T00:36:50.392946594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jbvp9,Uid:cd424043-83cc-4392-b67a-11b8c52917db,Namespace:calico-system,Attempt:0,}"
Sep 9 00:36:50.399708 kubelet[2465]: I0909 00:36:50.399608 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b22ce9a-63bb-41fc-bc0b-217e50145ea4-kubelet-dir\") pod \"csi-node-driver-5bczn\" (UID: \"5b22ce9a-63bb-41fc-bc0b-217e50145ea4\") " pod="calico-system/csi-node-driver-5bczn"
Sep 9 00:36:50.399990 kubelet[2465]: I0909 00:36:50.399856 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ccl5\" (UniqueName: \"kubernetes.io/projected/5b22ce9a-63bb-41fc-bc0b-217e50145ea4-kube-api-access-8ccl5\") pod \"csi-node-driver-5bczn\" (UID: \"5b22ce9a-63bb-41fc-bc0b-217e50145ea4\") " pod="calico-system/csi-node-driver-5bczn"
Sep 9 00:36:50.401101 kubelet[2465]: I0909 00:36:50.401066 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5b22ce9a-63bb-41fc-bc0b-217e50145ea4-registration-dir\") pod \"csi-node-driver-5bczn\" (UID: \"5b22ce9a-63bb-41fc-bc0b-217e50145ea4\") " pod="calico-system/csi-node-driver-5bczn"
Sep 9 00:36:50.401828 kubelet[2465]: I0909 00:36:50.401810 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5b22ce9a-63bb-41fc-bc0b-217e50145ea4-varrun\") pod \"csi-node-driver-5bczn\" (UID: \"5b22ce9a-63bb-41fc-bc0b-217e50145ea4\") " pod="calico-system/csi-node-driver-5bczn"
Sep 9 00:36:50.402187 kubelet[2465]: I0909 00:36:50.402128 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5b22ce9a-63bb-41fc-bc0b-217e50145ea4-socket-dir\") pod \"csi-node-driver-5bczn\" (UID: \"5b22ce9a-63bb-41fc-bc0b-217e50145ea4\") " pod="calico-system/csi-node-driver-5bczn"
Sep 9 00:36:50.423370 containerd[1438]: time="2025-09-09T00:36:50.422912506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:36:50.423370 containerd[1438]: time="2025-09-09T00:36:50.423155923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:36:50.423370 containerd[1438]: time="2025-09-09T00:36:50.423168808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:36:50.423370 containerd[1438]: time="2025-09-09T00:36:50.423277172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:36:50.453883 systemd[1]: Started cri-containerd-01471251e6cb957a9249a4b37c03a6d19d8fa223e0b2099bc2af1e409cb20679.scope - libcontainer container 01471251e6cb957a9249a4b37c03a6d19d8fa223e0b2099bc2af1e409cb20679.
Sep 9 00:36:50.479670 containerd[1438]: time="2025-09-09T00:36:50.479533925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jbvp9,Uid:cd424043-83cc-4392-b67a-11b8c52917db,Namespace:calico-system,Attempt:0,} returns sandbox id \"01471251e6cb957a9249a4b37c03a6d19d8fa223e0b2099bc2af1e409cb20679\""
Sep 9 00:36:50.516483 kubelet[2465]: E0909 00:36:50.516463 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:36:50.516483 kubelet[2465]: W0909 00:36:50.516479 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:36:50.516483 kubelet[2465]: E0909 00:36:50.516491 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:36:51.194923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180066346.mount: Deactivated successfully.
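The probe storm between 00:36:50.383 and 00:36:50.516 repeats the same triplet dozens of times and ends once a working driver can answer "init"; in a Calico deployment that driver is normally installed by the pod2daemon-flexvol image pulled just below. Purely as an illustration of what the probe expects, here is a stub driver in the documented FlexVolume response shape; the file name and install path (the nodeagent~uds/uds path from the log) are taken from the messages above, not from Calico's sources.

```go
// Illustrative stand-in for a FlexVolume driver binary such as the "uds"
// executable probed above. Real clusters use Calico's pod2daemon-flexvol,
// not this stub.
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Spec-shaped success response; "attach": false tells kubelet
		// this driver needs no separate attach/detach phase.
		fmt.Println(`{"status": "Success", "capabilities": {"attach": false}}`)
		return
	}
	// Any verb this stub does not implement.
	fmt.Println(`{"status": "Not supported"}`)
	os.Exit(1)
}
```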
Sep 9 00:36:51.702671 containerd[1438]: time="2025-09-09T00:36:51.702622812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:36:51.703701 containerd[1438]: time="2025-09-09T00:36:51.703333124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 9 00:36:51.704482 containerd[1438]: time="2025-09-09T00:36:51.704431784Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:36:51.706924 containerd[1438]: time="2025-09-09T00:36:51.706856751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:36:51.707564 containerd[1438]: time="2025-09-09T00:36:51.707471426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.579078784s"
Sep 9 00:36:51.707564 containerd[1438]: time="2025-09-09T00:36:51.707513042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 9 00:36:51.708949 containerd[1438]: time="2025-09-09T00:36:51.708911697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 9 00:36:51.723960 containerd[1438]: time="2025-09-09T00:36:51.723923676Z" level=info msg="CreateContainer within sandbox \"8d99e07b9dee89efc27d626269b6c3f1e9d60fdd3a601079aae60b89ce75578c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 9 00:36:51.737635 containerd[1438]: time="2025-09-09T00:36:51.737587580Z" level=info msg="CreateContainer within sandbox \"8d99e07b9dee89efc27d626269b6c3f1e9d60fdd3a601079aae60b89ce75578c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4f09a7ff43c85a96002e5cad141649142a5cb605a047b7a1043770378c0a75f4\""
Sep 9 00:36:51.738278 containerd[1438]: time="2025-09-09T00:36:51.738249353Z" level=info msg="StartContainer for \"4f09a7ff43c85a96002e5cad141649142a5cb605a047b7a1043770378c0a75f4\""
Sep 9 00:36:51.780584 systemd[1]: Started cri-containerd-4f09a7ff43c85a96002e5cad141649142a5cb605a047b7a1043770378c0a75f4.scope - libcontainer container 4f09a7ff43c85a96002e5cad141649142a5cb605a047b7a1043770378c0a75f4.
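The "Pulled image" event above reports its own size and duration (33105629 bytes in 1.579078784s), which works out to roughly 20 MiB/s. A quick check of that arithmetic:

```go
// Back-of-the-envelope throughput for the typha pull logged above.
package main

import "fmt"

func main() {
	const bytes = 33105629.0    // size from the "Pulled image" event
	const seconds = 1.579078784 // duration from the same event
	bps := bytes / seconds
	fmt.Printf("%.0f B/s ≈ %.1f MiB/s\n", bps, bps/(1<<20)) // ≈ 20.0 MiB/s
}
```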
Sep 9 00:36:51.876457 containerd[1438]: time="2025-09-09T00:36:51.876404093Z" level=info msg="StartContainer for \"4f09a7ff43c85a96002e5cad141649142a5cb605a047b7a1043770378c0a75f4\" returns successfully"
Sep 9 00:36:52.268080 kubelet[2465]: E0909 00:36:52.267882 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bczn" podUID="5b22ce9a-63bb-41fc-bc0b-217e50145ea4"
Sep 9 00:36:52.355981 kubelet[2465]: E0909 00:36:52.355923 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:52.401408 kubelet[2465]: I0909 00:36:52.401269 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76d8f7b9b4-v9dhq" podStartSLOduration=1.820483135 podStartE2EDuration="3.401255579s" podCreationTimestamp="2025-09-09 00:36:49 +0000 UTC" firstStartedPulling="2025-09-09 00:36:50.12766171 +0000 UTC m=+21.950861194" lastFinishedPulling="2025-09-09 00:36:51.708434194 +0000 UTC m=+23.531633638" observedRunningTime="2025-09-09 00:36:52.401059307 +0000 UTC m=+24.224258791" watchObservedRunningTime="2025-09-09 00:36:52.401255579 +0000 UTC m=+24.224455063"
Sep 9 00:36:52.402595 kubelet[2465]: E0909 00:36:52.402408 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:36:52.402595 kubelet[2465]: W0909 00:36:52.402431 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:36:52.402595 kubelet[2465]: E0909 00:36:52.402450 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
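The "Observed pod startup duration" entry above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to subtract the image-pull window (lastFinishedPulling minus firstStartedPulling, using the monotonic m=+ offsets, which is also why the SLO metric excludes registry speed). Reconstructing both numbers from the logged timestamps:

```go
// Rebuild the two durations from the pod_startup_latency_tracker event
// above; all constants are copied from the log, nothing is measured here.
package main

import "fmt"

func main() {
	e2e := 52.401255579 - 49.0          // watchObservedRunningTime - podCreationTimestamp (seconds past 00:36)
	pull := 23.531633638 - 21.950861194 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
	fmt.Printf("podStartE2EDuration=%.9fs\n", e2e)     // 3.401255579s, as logged
	fmt.Printf("podStartSLOduration=%.9f\n", e2e-pull) // 1.820483135, as logged
}
```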
Error: unexpected end of JSON input" Sep 9 00:36:52.421827 kubelet[2465]: E0909 00:36:52.421806 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:52.421827 kubelet[2465]: W0909 00:36:52.421818 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:52.421827 kubelet[2465]: E0909 00:36:52.421827 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:36:52.422035 kubelet[2465]: E0909 00:36:52.421991 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:52.422035 kubelet[2465]: W0909 00:36:52.422006 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:52.422035 kubelet[2465]: E0909 00:36:52.422016 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:36:52.422479 kubelet[2465]: E0909 00:36:52.422271 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:52.422479 kubelet[2465]: W0909 00:36:52.422281 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:52.422479 kubelet[2465]: E0909 00:36:52.422291 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:36:52.422479 kubelet[2465]: E0909 00:36:52.422442 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:52.422479 kubelet[2465]: W0909 00:36:52.422449 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:52.422479 kubelet[2465]: E0909 00:36:52.422461 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:36:52.422690 kubelet[2465]: E0909 00:36:52.422671 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:52.422690 kubelet[2465]: W0909 00:36:52.422679 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:52.422690 kubelet[2465]: E0909 00:36:52.422687 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:36:52.423505 kubelet[2465]: E0909 00:36:52.423011 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:52.423505 kubelet[2465]: W0909 00:36:52.423026 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:52.423505 kubelet[2465]: E0909 00:36:52.423037 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:36:52.423505 kubelet[2465]: E0909 00:36:52.423189 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:52.423505 kubelet[2465]: W0909 00:36:52.423197 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:52.423505 kubelet[2465]: E0909 00:36:52.423205 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:36:52.423505 kubelet[2465]: E0909 00:36:52.423357 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:52.423505 kubelet[2465]: W0909 00:36:52.423364 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:52.423505 kubelet[2465]: E0909 00:36:52.423370 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:36:52.423505 kubelet[2465]: E0909 00:36:52.423519 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:52.423818 kubelet[2465]: W0909 00:36:52.423527 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:52.423818 kubelet[2465]: E0909 00:36:52.423534 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:36:52.423867 kubelet[2465]: E0909 00:36:52.423834 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:36:52.423867 kubelet[2465]: W0909 00:36:52.423843 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:36:52.423867 kubelet[2465]: E0909 00:36:52.423851 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:36:53.011666 containerd[1438]: time="2025-09-09T00:36:53.011603731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:53.012320 containerd[1438]: time="2025-09-09T00:36:53.012290491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 9 00:36:53.013071 containerd[1438]: time="2025-09-09T00:36:53.013044475Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:53.015375 containerd[1438]: time="2025-09-09T00:36:53.015335796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:53.016239 containerd[1438]: time="2025-09-09T00:36:53.016191536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.307240865s" Sep 9 00:36:53.016239 containerd[1438]: time="2025-09-09T00:36:53.016236871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 9 00:36:53.020142 containerd[1438]: time="2025-09-09T00:36:53.020110187Z" level=info msg="CreateContainer within sandbox \"01471251e6cb957a9249a4b37c03a6d19d8fa223e0b2099bc2af1e409cb20679\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:36:53.033100 containerd[1438]: time="2025-09-09T00:36:53.033059637Z" level=info msg="CreateContainer within sandbox \"01471251e6cb957a9249a4b37c03a6d19d8fa223e0b2099bc2af1e409cb20679\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"074c97ac633436adf09a746f13b80d75fef599660a20c28660188b19b3cc6354\"" Sep 9 00:36:53.034217 containerd[1438]: time="2025-09-09T00:36:53.033631717Z" level=info msg="StartContainer for \"074c97ac633436adf09a746f13b80d75fef599660a20c28660188b19b3cc6354\"" Sep 9 00:36:53.068788 systemd[1]: Started cri-containerd-074c97ac633436adf09a746f13b80d75fef599660a20c28660188b19b3cc6354.scope - libcontainer container 074c97ac633436adf09a746f13b80d75fef599660a20c28660188b19b3cc6354. Sep 9 00:36:53.103277 containerd[1438]: time="2025-09-09T00:36:53.103217223Z" level=info msg="StartContainer for \"074c97ac633436adf09a746f13b80d75fef599660a20c28660188b19b3cc6354\" returns successfully" Sep 9 00:36:53.107738 systemd[1]: cri-containerd-074c97ac633436adf09a746f13b80d75fef599660a20c28660188b19b3cc6354.scope: Deactivated successfully. 
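
The repeated kubelet errors above are FlexVolume plugin probing: kubelet walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and execs its uds binary with the argument init. That binary has not been installed yet (the flexvol-driver init container that ships it is only just being pulled from ghcr.io/flatcar/calico/pod2daemon-flexvol), so the exec fails, stdout is empty, and unmarshalling "" as JSON yields "unexpected end of JSON input". As a minimal sketch of the handshake kubelet expects — a hypothetical stand-in, not Calico's actual uds driver — a FlexVolume driver only needs to answer init with a JSON status object on stdout:

```go
// flexvol_init_stub.go: minimal sketch of the FlexVolume "init" handshake.
// Hypothetical illustration only; not the real nodeagent~uds driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the result object kubelet's driver-call unmarshals.
// An empty stdout is exactly what produces "unexpected end of JSON input".
type driverStatus struct {
	Status       string        `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string        `json:"message,omitempty"`
	Capabilities *capabilities `json:"capabilities,omitempty"`
}

type capabilities struct {
	Attach bool `json:"attach"` // false: kubelet skips attach/detach calls
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: &capabilities{Attach: false},
		})
		fmt.Println(string(out))
		return
	}
	// mount/unmount/etc. are left unimplemented in this stub.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

Once the flexvol-driver container has run and placed the real uds binary into that directory, the probe succeeds and the error storm stops, which matches the driver-call.go messages falling silent after 00:36:52 in this log.
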
Sep 9 00:36:53.143612 containerd[1438]: time="2025-09-09T00:36:53.137769112Z" level=info msg="shim disconnected" id=074c97ac633436adf09a746f13b80d75fef599660a20c28660188b19b3cc6354 namespace=k8s.io Sep 9 00:36:53.143612 containerd[1438]: time="2025-09-09T00:36:53.143598031Z" level=warning msg="cleaning up after shim disconnected" id=074c97ac633436adf09a746f13b80d75fef599660a20c28660188b19b3cc6354 namespace=k8s.io Sep 9 00:36:53.143612 containerd[1438]: time="2025-09-09T00:36:53.143615197Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:36:53.365174 kubelet[2465]: I0909 00:36:53.365068 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:36:53.367295 kubelet[2465]: E0909 00:36:53.365590 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:53.374552 containerd[1438]: time="2025-09-09T00:36:53.374405783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:36:54.030926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-074c97ac633436adf09a746f13b80d75fef599660a20c28660188b19b3cc6354-rootfs.mount: Deactivated successfully. Sep 9 00:36:54.268187 kubelet[2465]: E0909 00:36:54.267358 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bczn" podUID="5b22ce9a-63bb-41fc-bc0b-217e50145ea4" Sep 9 00:36:56.164377 containerd[1438]: time="2025-09-09T00:36:56.163675926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:56.164815 containerd[1438]: time="2025-09-09T00:36:56.164528909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 9 00:36:56.166458 containerd[1438]: time="2025-09-09T00:36:56.166423454Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:56.178362 containerd[1438]: time="2025-09-09T00:36:56.178323164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:56.179281 containerd[1438]: time="2025-09-09T00:36:56.179228763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.804781485s" Sep 9 00:36:56.179281 containerd[1438]: time="2025-09-09T00:36:56.179276138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 9 00:36:56.185474 containerd[1438]: time="2025-09-09T00:36:56.185279669Z" level=info msg="CreateContainer within sandbox \"01471251e6cb957a9249a4b37c03a6d19d8fa223e0b2099bc2af1e409cb20679\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:36:56.204738 
containerd[1438]: time="2025-09-09T00:36:56.204691456Z" level=info msg="CreateContainer within sandbox \"01471251e6cb957a9249a4b37c03a6d19d8fa223e0b2099bc2af1e409cb20679\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b92bc14d3ed8316790573af864890608dcebc6ed2802c3bf5ff28f6f72d0d21d\"" Sep 9 00:36:56.205682 containerd[1438]: time="2025-09-09T00:36:56.205463854Z" level=info msg="StartContainer for \"b92bc14d3ed8316790573af864890608dcebc6ed2802c3bf5ff28f6f72d0d21d\"" Sep 9 00:36:56.241711 systemd[1]: Started cri-containerd-b92bc14d3ed8316790573af864890608dcebc6ed2802c3bf5ff28f6f72d0d21d.scope - libcontainer container b92bc14d3ed8316790573af864890608dcebc6ed2802c3bf5ff28f6f72d0d21d. Sep 9 00:36:56.268353 kubelet[2465]: E0909 00:36:56.268279 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bczn" podUID="5b22ce9a-63bb-41fc-bc0b-217e50145ea4" Sep 9 00:36:56.269959 containerd[1438]: time="2025-09-09T00:36:56.269006090Z" level=info msg="StartContainer for \"b92bc14d3ed8316790573af864890608dcebc6ed2802c3bf5ff28f6f72d0d21d\" returns successfully" Sep 9 00:36:57.147746 systemd[1]: cri-containerd-b92bc14d3ed8316790573af864890608dcebc6ed2802c3bf5ff28f6f72d0d21d.scope: Deactivated successfully. Sep 9 00:36:57.179494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b92bc14d3ed8316790573af864890608dcebc6ed2802c3bf5ff28f6f72d0d21d-rootfs.mount: Deactivated successfully. Sep 9 00:36:57.207122 containerd[1438]: time="2025-09-09T00:36:57.207061095Z" level=info msg="shim disconnected" id=b92bc14d3ed8316790573af864890608dcebc6ed2802c3bf5ff28f6f72d0d21d namespace=k8s.io Sep 9 00:36:57.207122 containerd[1438]: time="2025-09-09T00:36:57.207118872Z" level=warning msg="cleaning up after shim disconnected" id=b92bc14d3ed8316790573af864890608dcebc6ed2802c3bf5ff28f6f72d0d21d namespace=k8s.io Sep 9 00:36:57.207122 containerd[1438]: time="2025-09-09T00:36:57.207127675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:36:57.222590 kubelet[2465]: I0909 00:36:57.222309 2465 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:36:57.272109 systemd[1]: Created slice kubepods-burstable-podaf9f3d40_f917_4ead_8678_9e2ee2432935.slice - libcontainer container kubepods-burstable-podaf9f3d40_f917_4ead_8678_9e2ee2432935.slice. Sep 9 00:36:57.280231 systemd[1]: Created slice kubepods-burstable-podf85e14c3_579a_4825_b5fe_c349bc6999fc.slice - libcontainer container kubepods-burstable-podf85e14c3_579a_4825_b5fe_c349bc6999fc.slice. Sep 9 00:36:57.287327 systemd[1]: Created slice kubepods-besteffort-podc5b7c42d_cee3_4af6_9b63_496fe2a31687.slice - libcontainer container kubepods-besteffort-podc5b7c42d_cee3_4af6_9b63_496fe2a31687.slice. Sep 9 00:36:57.293535 systemd[1]: Created slice kubepods-besteffort-podbbe52a64_481a_4444_8cd9_6a232570debc.slice - libcontainer container kubepods-besteffort-podbbe52a64_481a_4444_8cd9_6a232570debc.slice. Sep 9 00:36:57.307244 systemd[1]: Created slice kubepods-besteffort-poda376016b_b8d6_4e3c_a75d_da673ae09400.slice - libcontainer container kubepods-besteffort-poda376016b_b8d6_4e3c_a75d_da673ae09400.slice. 
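
The "shim disconnected" messages above are the normal exit path for these short-lived init containers (first flexvol-driver, now install-cni): the container runs to completion, containerd's shim detaches, and the cleanup warnings are benign. The RunPodSandbox failures that follow are a separate, equally expected stage: every CNI add is refused because the Calico plugin cannot stat /var/lib/calico/nodename, a file written by the calico/node container, whose image is only pulled further down (ghcr.io/flatcar/calico/node:v3.30.3). A hedged sketch of that guard follows — the path and error text are taken from the log itself, while the surrounding Go is purely illustrative, not Calico's source:

```go
// nodename_check.go: illustrative sketch of the readiness guard behind the
// "stat /var/lib/calico/nodename" errors in this log. Not Calico's code.
package main

import (
	"fmt"
	"os"
	"strings"
)

// calicoNodename fails until calico/node has started and written the file,
// which is why every sandbox setup below is rejected with the same message.
func calicoNodename() (string, error) {
	const nodenameFile = "/var/lib/calico/nodename" // written by calico/node at startup
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // pods stay in ContainerCreating until this succeeds
	}
	fmt.Println("calico nodename:", name)
}
```

Until calico/node is up, each pending pod in this excerpt — the two coredns replicas, whisker, goldmane, both calico-apiserver instances, the kube-controllers, and the csi-node-driver — fails sandbox creation with that same error and is retried.
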
Sep 9 00:36:57.311172 systemd[1]: Created slice kubepods-besteffort-pod4ddd1c7a_7179_4302_aedf_d6664fe29be7.slice - libcontainer container kubepods-besteffort-pod4ddd1c7a_7179_4302_aedf_d6664fe29be7.slice. Sep 9 00:36:57.317466 systemd[1]: Created slice kubepods-besteffort-podf592feec_dcfc_407a_bf1e_672a950349d6.slice - libcontainer container kubepods-besteffort-podf592feec_dcfc_407a_bf1e_672a950349d6.slice. Sep 9 00:36:57.360132 kubelet[2465]: I0909 00:36:57.357460 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g4fq\" (UniqueName: \"kubernetes.io/projected/f85e14c3-579a-4825-b5fe-c349bc6999fc-kube-api-access-8g4fq\") pod \"coredns-674b8bbfcf-f48cr\" (UID: \"f85e14c3-579a-4825-b5fe-c349bc6999fc\") " pod="kube-system/coredns-674b8bbfcf-f48cr" Sep 9 00:36:57.360132 kubelet[2465]: I0909 00:36:57.360143 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl6rh\" (UniqueName: \"kubernetes.io/projected/af9f3d40-f917-4ead-8678-9e2ee2432935-kube-api-access-xl6rh\") pod \"coredns-674b8bbfcf-dzzmz\" (UID: \"af9f3d40-f917-4ead-8678-9e2ee2432935\") " pod="kube-system/coredns-674b8bbfcf-dzzmz" Sep 9 00:36:57.360495 kubelet[2465]: I0909 00:36:57.360166 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f85e14c3-579a-4825-b5fe-c349bc6999fc-config-volume\") pod \"coredns-674b8bbfcf-f48cr\" (UID: \"f85e14c3-579a-4825-b5fe-c349bc6999fc\") " pod="kube-system/coredns-674b8bbfcf-f48cr" Sep 9 00:36:57.360495 kubelet[2465]: I0909 00:36:57.360185 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dmz\" (UniqueName: \"kubernetes.io/projected/c5b7c42d-cee3-4af6-9b63-496fe2a31687-kube-api-access-74dmz\") pod \"whisker-6f8ff9d7b9-lggcj\" (UID: \"c5b7c42d-cee3-4af6-9b63-496fe2a31687\") " pod="calico-system/whisker-6f8ff9d7b9-lggcj" Sep 9 00:36:57.360495 kubelet[2465]: I0909 00:36:57.360200 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbe52a64-481a-4444-8cd9-6a232570debc-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-qpjc4\" (UID: \"bbe52a64-481a-4444-8cd9-6a232570debc\") " pod="calico-system/goldmane-54d579b49d-qpjc4" Sep 9 00:36:57.360495 kubelet[2465]: I0909 00:36:57.360214 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a376016b-b8d6-4e3c-a75d-da673ae09400-calico-apiserver-certs\") pod \"calico-apiserver-78d549548b-pkrjv\" (UID: \"a376016b-b8d6-4e3c-a75d-da673ae09400\") " pod="calico-apiserver/calico-apiserver-78d549548b-pkrjv" Sep 9 00:36:57.360495 kubelet[2465]: I0909 00:36:57.360233 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af9f3d40-f917-4ead-8678-9e2ee2432935-config-volume\") pod \"coredns-674b8bbfcf-dzzmz\" (UID: \"af9f3d40-f917-4ead-8678-9e2ee2432935\") " pod="kube-system/coredns-674b8bbfcf-dzzmz" Sep 9 00:36:57.360731 kubelet[2465]: I0909 00:36:57.360248 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlvz6\" (UniqueName: 
\"kubernetes.io/projected/4ddd1c7a-7179-4302-aedf-d6664fe29be7-kube-api-access-wlvz6\") pod \"calico-kube-controllers-7d4886c776-cnp84\" (UID: \"4ddd1c7a-7179-4302-aedf-d6664fe29be7\") " pod="calico-system/calico-kube-controllers-7d4886c776-cnp84" Sep 9 00:36:57.360731 kubelet[2465]: I0909 00:36:57.360267 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b7c42d-cee3-4af6-9b63-496fe2a31687-whisker-ca-bundle\") pod \"whisker-6f8ff9d7b9-lggcj\" (UID: \"c5b7c42d-cee3-4af6-9b63-496fe2a31687\") " pod="calico-system/whisker-6f8ff9d7b9-lggcj" Sep 9 00:36:57.360731 kubelet[2465]: I0909 00:36:57.360282 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f592feec-dcfc-407a-bf1e-672a950349d6-calico-apiserver-certs\") pod \"calico-apiserver-78d549548b-j9zf9\" (UID: \"f592feec-dcfc-407a-bf1e-672a950349d6\") " pod="calico-apiserver/calico-apiserver-78d549548b-j9zf9" Sep 9 00:36:57.360731 kubelet[2465]: I0909 00:36:57.360297 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8whl\" (UniqueName: \"kubernetes.io/projected/f592feec-dcfc-407a-bf1e-672a950349d6-kube-api-access-s8whl\") pod \"calico-apiserver-78d549548b-j9zf9\" (UID: \"f592feec-dcfc-407a-bf1e-672a950349d6\") " pod="calico-apiserver/calico-apiserver-78d549548b-j9zf9" Sep 9 00:36:57.360731 kubelet[2465]: I0909 00:36:57.360310 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bbe52a64-481a-4444-8cd9-6a232570debc-goldmane-key-pair\") pod \"goldmane-54d579b49d-qpjc4\" (UID: \"bbe52a64-481a-4444-8cd9-6a232570debc\") " pod="calico-system/goldmane-54d579b49d-qpjc4" Sep 9 00:36:57.360840 kubelet[2465]: I0909 00:36:57.360325 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gplbv\" (UniqueName: \"kubernetes.io/projected/bbe52a64-481a-4444-8cd9-6a232570debc-kube-api-access-gplbv\") pod \"goldmane-54d579b49d-qpjc4\" (UID: \"bbe52a64-481a-4444-8cd9-6a232570debc\") " pod="calico-system/goldmane-54d579b49d-qpjc4" Sep 9 00:36:57.360840 kubelet[2465]: I0909 00:36:57.360349 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5b7c42d-cee3-4af6-9b63-496fe2a31687-whisker-backend-key-pair\") pod \"whisker-6f8ff9d7b9-lggcj\" (UID: \"c5b7c42d-cee3-4af6-9b63-496fe2a31687\") " pod="calico-system/whisker-6f8ff9d7b9-lggcj" Sep 9 00:36:57.360840 kubelet[2465]: I0909 00:36:57.360363 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ddd1c7a-7179-4302-aedf-d6664fe29be7-tigera-ca-bundle\") pod \"calico-kube-controllers-7d4886c776-cnp84\" (UID: \"4ddd1c7a-7179-4302-aedf-d6664fe29be7\") " pod="calico-system/calico-kube-controllers-7d4886c776-cnp84" Sep 9 00:36:57.360840 kubelet[2465]: I0909 00:36:57.360376 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz49n\" (UniqueName: \"kubernetes.io/projected/a376016b-b8d6-4e3c-a75d-da673ae09400-kube-api-access-lz49n\") pod \"calico-apiserver-78d549548b-pkrjv\" (UID: 
\"a376016b-b8d6-4e3c-a75d-da673ae09400\") " pod="calico-apiserver/calico-apiserver-78d549548b-pkrjv" Sep 9 00:36:57.360840 kubelet[2465]: I0909 00:36:57.360398 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe52a64-481a-4444-8cd9-6a232570debc-config\") pod \"goldmane-54d579b49d-qpjc4\" (UID: \"bbe52a64-481a-4444-8cd9-6a232570debc\") " pod="calico-system/goldmane-54d579b49d-qpjc4" Sep 9 00:36:57.380867 containerd[1438]: time="2025-09-09T00:36:57.380832938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:36:57.576260 kubelet[2465]: E0909 00:36:57.576228 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:57.578421 containerd[1438]: time="2025-09-09T00:36:57.577777607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dzzmz,Uid:af9f3d40-f917-4ead-8678-9e2ee2432935,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:57.583203 kubelet[2465]: E0909 00:36:57.583129 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:57.583778 containerd[1438]: time="2025-09-09T00:36:57.583581446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f48cr,Uid:f85e14c3-579a-4825-b5fe-c349bc6999fc,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:57.591071 containerd[1438]: time="2025-09-09T00:36:57.591026852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f8ff9d7b9-lggcj,Uid:c5b7c42d-cee3-4af6-9b63-496fe2a31687,Namespace:calico-system,Attempt:0,}" Sep 9 00:36:57.606058 containerd[1438]: time="2025-09-09T00:36:57.605987765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qpjc4,Uid:bbe52a64-481a-4444-8cd9-6a232570debc,Namespace:calico-system,Attempt:0,}" Sep 9 00:36:57.610439 containerd[1438]: time="2025-09-09T00:36:57.609990511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d549548b-pkrjv,Uid:a376016b-b8d6-4e3c-a75d-da673ae09400,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:36:57.615758 containerd[1438]: time="2025-09-09T00:36:57.615247108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4886c776-cnp84,Uid:4ddd1c7a-7179-4302-aedf-d6664fe29be7,Namespace:calico-system,Attempt:0,}" Sep 9 00:36:57.620560 containerd[1438]: time="2025-09-09T00:36:57.620212859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d549548b-j9zf9,Uid:f592feec-dcfc-407a-bf1e-672a950349d6,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:36:57.726922 containerd[1438]: time="2025-09-09T00:36:57.726747062Z" level=error msg="Failed to destroy network for sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.726922 containerd[1438]: time="2025-09-09T00:36:57.726813402Z" level=error msg="Failed to destroy network for sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.727171 containerd[1438]: time="2025-09-09T00:36:57.727077000Z" level=error msg="encountered an error cleaning up failed sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.727171 containerd[1438]: time="2025-09-09T00:36:57.727126734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dzzmz,Uid:af9f3d40-f917-4ead-8678-9e2ee2432935,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.727370 kubelet[2465]: E0909 00:36:57.727319 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.727427 kubelet[2465]: E0909 00:36:57.727399 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dzzmz" Sep 9 00:36:57.727427 kubelet[2465]: E0909 00:36:57.727420 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dzzmz" Sep 9 00:36:57.727484 kubelet[2465]: E0909 00:36:57.727468 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dzzmz_kube-system(af9f3d40-f917-4ead-8678-9e2ee2432935)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dzzmz_kube-system(af9f3d40-f917-4ead-8678-9e2ee2432935)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dzzmz" podUID="af9f3d40-f917-4ead-8678-9e2ee2432935" Sep 9 00:36:57.727695 containerd[1438]: time="2025-09-09T00:36:57.727347200Z" level=error msg="encountered an error cleaning up failed sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.727695 containerd[1438]: time="2025-09-09T00:36:57.727645088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f48cr,Uid:f85e14c3-579a-4825-b5fe-c349bc6999fc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.728837 kubelet[2465]: E0909 00:36:57.728697 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.728837 kubelet[2465]: E0909 00:36:57.728744 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f48cr" Sep 9 00:36:57.728837 kubelet[2465]: E0909 00:36:57.728764 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f48cr" Sep 9 00:36:57.728943 kubelet[2465]: E0909 00:36:57.728799 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-f48cr_kube-system(f85e14c3-579a-4825-b5fe-c349bc6999fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-f48cr_kube-system(f85e14c3-579a-4825-b5fe-c349bc6999fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f48cr" podUID="f85e14c3-579a-4825-b5fe-c349bc6999fc" Sep 9 00:36:57.784219 containerd[1438]: time="2025-09-09T00:36:57.783861303Z" level=error msg="Failed to destroy network for sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.785137 containerd[1438]: time="2025-09-09T00:36:57.785099350Z" level=error msg="encountered an error cleaning up failed sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.785210 containerd[1438]: time="2025-09-09T00:36:57.785171211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f8ff9d7b9-lggcj,Uid:c5b7c42d-cee3-4af6-9b63-496fe2a31687,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.785498 kubelet[2465]: E0909 00:36:57.785407 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.785498 kubelet[2465]: E0909 00:36:57.785461 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f8ff9d7b9-lggcj" Sep 9 00:36:57.785498 kubelet[2465]: E0909 00:36:57.785480 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f8ff9d7b9-lggcj" Sep 9 00:36:57.785613 kubelet[2465]: E0909 00:36:57.785520 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f8ff9d7b9-lggcj_calico-system(c5b7c42d-cee3-4af6-9b63-496fe2a31687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f8ff9d7b9-lggcj_calico-system(c5b7c42d-cee3-4af6-9b63-496fe2a31687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f8ff9d7b9-lggcj" podUID="c5b7c42d-cee3-4af6-9b63-496fe2a31687" Sep 9 00:36:57.851010 containerd[1438]: time="2025-09-09T00:36:57.849662958Z" level=error msg="Failed to destroy network for sandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.851010 containerd[1438]: time="2025-09-09T00:36:57.850263856Z" level=error msg="encountered an error cleaning up failed sandbox 
\"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.851010 containerd[1438]: time="2025-09-09T00:36:57.850314151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d549548b-j9zf9,Uid:f592feec-dcfc-407a-bf1e-672a950349d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.852416 kubelet[2465]: E0909 00:36:57.850517 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.852416 kubelet[2465]: E0909 00:36:57.850579 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d549548b-j9zf9" Sep 9 00:36:57.852416 kubelet[2465]: E0909 00:36:57.850607 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d549548b-j9zf9" Sep 9 00:36:57.852524 kubelet[2465]: E0909 00:36:57.850649 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d549548b-j9zf9_calico-apiserver(f592feec-dcfc-407a-bf1e-672a950349d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d549548b-j9zf9_calico-apiserver(f592feec-dcfc-407a-bf1e-672a950349d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d549548b-j9zf9" podUID="f592feec-dcfc-407a-bf1e-672a950349d6" Sep 9 00:36:57.854985 containerd[1438]: time="2025-09-09T00:36:57.854941162Z" level=error msg="Failed to destroy network for sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.855307 
containerd[1438]: time="2025-09-09T00:36:57.855276941Z" level=error msg="encountered an error cleaning up failed sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.855345 containerd[1438]: time="2025-09-09T00:36:57.855327156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4886c776-cnp84,Uid:4ddd1c7a-7179-4302-aedf-d6664fe29be7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.855931 kubelet[2465]: E0909 00:36:57.855522 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.855931 kubelet[2465]: E0909 00:36:57.855578 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4886c776-cnp84" Sep 9 00:36:57.855931 kubelet[2465]: E0909 00:36:57.855596 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4886c776-cnp84" Sep 9 00:36:57.856231 kubelet[2465]: E0909 00:36:57.855644 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d4886c776-cnp84_calico-system(4ddd1c7a-7179-4302-aedf-d6664fe29be7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d4886c776-cnp84_calico-system(4ddd1c7a-7179-4302-aedf-d6664fe29be7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d4886c776-cnp84" podUID="4ddd1c7a-7179-4302-aedf-d6664fe29be7" Sep 9 00:36:57.857309 containerd[1438]: time="2025-09-09T00:36:57.856831562Z" level=error msg="Failed to destroy network for sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.857710 containerd[1438]: time="2025-09-09T00:36:57.857678293Z" level=error msg="encountered an error cleaning up failed sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.857849 containerd[1438]: time="2025-09-09T00:36:57.857823616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d549548b-pkrjv,Uid:a376016b-b8d6-4e3c-a75d-da673ae09400,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.858076 kubelet[2465]: E0909 00:36:57.858040 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.858129 kubelet[2465]: E0909 00:36:57.858106 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d549548b-pkrjv" Sep 9 00:36:57.858165 kubelet[2465]: E0909 00:36:57.858127 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d549548b-pkrjv" Sep 9 00:36:57.858192 kubelet[2465]: E0909 00:36:57.858165 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d549548b-pkrjv_calico-apiserver(a376016b-b8d6-4e3c-a75d-da673ae09400)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d549548b-pkrjv_calico-apiserver(a376016b-b8d6-4e3c-a75d-da673ae09400)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d549548b-pkrjv" podUID="a376016b-b8d6-4e3c-a75d-da673ae09400" Sep 9 00:36:57.858891 containerd[1438]: time="2025-09-09T00:36:57.858858922Z" level=error msg="Failed to destroy network for sandbox 
\"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.859285 containerd[1438]: time="2025-09-09T00:36:57.859224871Z" level=error msg="encountered an error cleaning up failed sandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.859802 containerd[1438]: time="2025-09-09T00:36:57.859772793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qpjc4,Uid:bbe52a64-481a-4444-8cd9-6a232570debc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.860188 kubelet[2465]: E0909 00:36:57.860012 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:57.860188 kubelet[2465]: E0909 00:36:57.860070 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qpjc4" Sep 9 00:36:57.860188 kubelet[2465]: E0909 00:36:57.860090 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qpjc4" Sep 9 00:36:57.860305 kubelet[2465]: E0909 00:36:57.860135 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-qpjc4_calico-system(bbe52a64-481a-4444-8cd9-6a232570debc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-qpjc4_calico-system(bbe52a64-481a-4444-8cd9-6a232570debc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-qpjc4" podUID="bbe52a64-481a-4444-8cd9-6a232570debc" Sep 9 00:36:58.274781 systemd[1]: Created slice 
kubepods-besteffort-pod5b22ce9a_63bb_41fc_bc0b_217e50145ea4.slice - libcontainer container kubepods-besteffort-pod5b22ce9a_63bb_41fc_bc0b_217e50145ea4.slice. Sep 9 00:36:58.277645 containerd[1438]: time="2025-09-09T00:36:58.277286027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bczn,Uid:5b22ce9a-63bb-41fc-bc0b-217e50145ea4,Namespace:calico-system,Attempt:0,}" Sep 9 00:36:58.346476 containerd[1438]: time="2025-09-09T00:36:58.346395356Z" level=error msg="Failed to destroy network for sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.349682 containerd[1438]: time="2025-09-09T00:36:58.349605710Z" level=error msg="encountered an error cleaning up failed sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.350111 containerd[1438]: time="2025-09-09T00:36:58.349986059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bczn,Uid:5b22ce9a-63bb-41fc-bc0b-217e50145ea4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.350378 kubelet[2465]: E0909 00:36:58.350224 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.350755 kubelet[2465]: E0909 00:36:58.350505 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5bczn" Sep 9 00:36:58.351581 kubelet[2465]: E0909 00:36:58.350836 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5bczn" Sep 9 00:36:58.351783 kubelet[2465]: E0909 00:36:58.351718 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5bczn_calico-system(5b22ce9a-63bb-41fc-bc0b-217e50145ea4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-5bczn_calico-system(5b22ce9a-63bb-41fc-bc0b-217e50145ea4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5bczn" podUID="5b22ce9a-63bb-41fc-bc0b-217e50145ea4" Sep 9 00:36:58.383417 kubelet[2465]: I0909 00:36:58.383157 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:36:58.384423 containerd[1438]: time="2025-09-09T00:36:58.384329843Z" level=info msg="StopPodSandbox for \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\"" Sep 9 00:36:58.384644 containerd[1438]: time="2025-09-09T00:36:58.384532541Z" level=info msg="Ensure that sandbox 6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1 in task-service has been cleanup successfully" Sep 9 00:36:58.385782 kubelet[2465]: I0909 00:36:58.385421 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:36:58.387111 containerd[1438]: time="2025-09-09T00:36:58.387074465Z" level=info msg="StopPodSandbox for \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\"" Sep 9 00:36:58.387265 containerd[1438]: time="2025-09-09T00:36:58.387243793Z" level=info msg="Ensure that sandbox 5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71 in task-service has been cleanup successfully" Sep 9 00:36:58.389745 kubelet[2465]: I0909 00:36:58.388382 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:36:58.389940 containerd[1438]: time="2025-09-09T00:36:58.388976487Z" level=info msg="StopPodSandbox for \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\"" Sep 9 00:36:58.389940 containerd[1438]: time="2025-09-09T00:36:58.389099122Z" level=info msg="Ensure that sandbox b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676 in task-service has been cleanup successfully" Sep 9 00:36:58.392953 kubelet[2465]: I0909 00:36:58.392927 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:36:58.394439 containerd[1438]: time="2025-09-09T00:36:58.393727400Z" level=info msg="StopPodSandbox for \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\"" Sep 9 00:36:58.394439 containerd[1438]: time="2025-09-09T00:36:58.393872442Z" level=info msg="Ensure that sandbox bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba in task-service has been cleanup successfully" Sep 9 00:36:58.394859 kubelet[2465]: I0909 00:36:58.394374 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:36:58.397355 kubelet[2465]: I0909 00:36:58.397230 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:36:58.399602 containerd[1438]: time="2025-09-09T00:36:58.399574706Z" level=info msg="StopPodSandbox for 
\"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\"" Sep 9 00:36:58.400871 kubelet[2465]: I0909 00:36:58.400843 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:36:58.402207 containerd[1438]: time="2025-09-09T00:36:58.402079980Z" level=info msg="Ensure that sandbox 0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1 in task-service has been cleanup successfully" Sep 9 00:36:58.402619 containerd[1438]: time="2025-09-09T00:36:58.402430400Z" level=info msg="StopPodSandbox for \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\"" Sep 9 00:36:58.403372 containerd[1438]: time="2025-09-09T00:36:58.403225746Z" level=info msg="StopPodSandbox for \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\"" Sep 9 00:36:58.403372 containerd[1438]: time="2025-09-09T00:36:58.403362825Z" level=info msg="Ensure that sandbox 97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb in task-service has been cleanup successfully" Sep 9 00:36:58.405201 containerd[1438]: time="2025-09-09T00:36:58.405172741Z" level=info msg="Ensure that sandbox 15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e in task-service has been cleanup successfully" Sep 9 00:36:58.409135 kubelet[2465]: I0909 00:36:58.409103 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:36:58.410828 containerd[1438]: time="2025-09-09T00:36:58.410751771Z" level=info msg="StopPodSandbox for \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\"" Sep 9 00:36:58.412874 containerd[1438]: time="2025-09-09T00:36:58.412198303Z" level=info msg="Ensure that sandbox 2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d in task-service has been cleanup successfully" Sep 9 00:36:58.439258 containerd[1438]: time="2025-09-09T00:36:58.439091004Z" level=error msg="StopPodSandbox for \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\" failed" error="failed to destroy network for sandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.440766 kubelet[2465]: E0909 00:36:58.440647 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:36:58.444073 kubelet[2465]: E0909 00:36:58.444005 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71"} Sep 9 00:36:58.444158 kubelet[2465]: E0909 00:36:58.444087 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bbe52a64-481a-4444-8cd9-6a232570debc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:36:58.444158 kubelet[2465]: E0909 00:36:58.444111 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bbe52a64-481a-4444-8cd9-6a232570debc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-qpjc4" podUID="bbe52a64-481a-4444-8cd9-6a232570debc" Sep 9 00:36:58.450040 containerd[1438]: time="2025-09-09T00:36:58.449987349Z" level=error msg="StopPodSandbox for \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\" failed" error="failed to destroy network for sandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.450529 kubelet[2465]: E0909 00:36:58.450456 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:36:58.450529 kubelet[2465]: E0909 00:36:58.450503 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba"} Sep 9 00:36:58.450674 kubelet[2465]: E0909 00:36:58.450534 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f592feec-dcfc-407a-bf1e-672a950349d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:36:58.450674 kubelet[2465]: E0909 00:36:58.450572 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f592feec-dcfc-407a-bf1e-672a950349d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d549548b-j9zf9" podUID="f592feec-dcfc-407a-bf1e-672a950349d6" Sep 9 00:36:58.458681 containerd[1438]: time="2025-09-09T00:36:58.458135430Z" level=error msg="StopPodSandbox for \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\" failed" 
error="failed to destroy network for sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.458895 kubelet[2465]: E0909 00:36:58.458346 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:36:58.458895 kubelet[2465]: E0909 00:36:58.458384 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676"} Sep 9 00:36:58.458895 kubelet[2465]: E0909 00:36:58.458415 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5b7c42d-cee3-4af6-9b63-496fe2a31687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:36:58.458895 kubelet[2465]: E0909 00:36:58.458435 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5b7c42d-cee3-4af6-9b63-496fe2a31687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f8ff9d7b9-lggcj" podUID="c5b7c42d-cee3-4af6-9b63-496fe2a31687" Sep 9 00:36:58.459997 containerd[1438]: time="2025-09-09T00:36:58.459953028Z" level=error msg="StopPodSandbox for \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\" failed" error="failed to destroy network for sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.460156 kubelet[2465]: E0909 00:36:58.460123 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:36:58.460199 kubelet[2465]: E0909 00:36:58.460177 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1"} Sep 9 00:36:58.460226 kubelet[2465]: E0909 00:36:58.460201 2465 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b22ce9a-63bb-41fc-bc0b-217e50145ea4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:36:58.460266 kubelet[2465]: E0909 00:36:58.460233 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b22ce9a-63bb-41fc-bc0b-217e50145ea4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5bczn" podUID="5b22ce9a-63bb-41fc-bc0b-217e50145ea4" Sep 9 00:36:58.461531 containerd[1438]: time="2025-09-09T00:36:58.461500469Z" level=error msg="StopPodSandbox for \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\" failed" error="failed to destroy network for sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.462344 kubelet[2465]: E0909 00:36:58.462314 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:36:58.462410 kubelet[2465]: E0909 00:36:58.462347 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb"} Sep 9 00:36:58.462410 kubelet[2465]: E0909 00:36:58.462371 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4ddd1c7a-7179-4302-aedf-d6664fe29be7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:36:58.462410 kubelet[2465]: E0909 00:36:58.462388 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4ddd1c7a-7179-4302-aedf-d6664fe29be7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d4886c776-cnp84" 
podUID="4ddd1c7a-7179-4302-aedf-d6664fe29be7" Sep 9 00:36:58.462612 containerd[1438]: time="2025-09-09T00:36:58.462585818Z" level=error msg="StopPodSandbox for \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\" failed" error="failed to destroy network for sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.463327 kubelet[2465]: E0909 00:36:58.463143 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:36:58.463327 kubelet[2465]: E0909 00:36:58.463180 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d"} Sep 9 00:36:58.463327 kubelet[2465]: E0909 00:36:58.463203 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af9f3d40-f917-4ead-8678-9e2ee2432935\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:36:58.463327 kubelet[2465]: E0909 00:36:58.463294 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af9f3d40-f917-4ead-8678-9e2ee2432935\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dzzmz" podUID="af9f3d40-f917-4ead-8678-9e2ee2432935" Sep 9 00:36:58.467921 containerd[1438]: time="2025-09-09T00:36:58.467883127Z" level=error msg="StopPodSandbox for \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\" failed" error="failed to destroy network for sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.468304 kubelet[2465]: E0909 00:36:58.468273 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:36:58.468413 kubelet[2465]: E0909 
00:36:58.468395 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1"} Sep 9 00:36:58.468497 kubelet[2465]: E0909 00:36:58.468483 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f85e14c3-579a-4825-b5fe-c349bc6999fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:36:58.469417 kubelet[2465]: E0909 00:36:58.468670 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f85e14c3-579a-4825-b5fe-c349bc6999fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f48cr" podUID="f85e14c3-579a-4825-b5fe-c349bc6999fc" Sep 9 00:36:58.475248 containerd[1438]: time="2025-09-09T00:36:58.475206773Z" level=error msg="StopPodSandbox for \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\" failed" error="failed to destroy network for sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:36:58.475436 kubelet[2465]: E0909 00:36:58.475400 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:36:58.475478 kubelet[2465]: E0909 00:36:58.475447 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e"} Sep 9 00:36:58.475503 kubelet[2465]: E0909 00:36:58.475478 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a376016b-b8d6-4e3c-a75d-da673ae09400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:36:58.475572 kubelet[2465]: E0909 00:36:58.475498 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a376016b-b8d6-4e3c-a75d-da673ae09400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d549548b-pkrjv" podUID="a376016b-b8d6-4e3c-a75d-da673ae09400" Sep 9 00:37:01.553766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount609500028.mount: Deactivated successfully. Sep 9 00:37:01.819840 containerd[1438]: time="2025-09-09T00:37:01.819711809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:01.821077 containerd[1438]: time="2025-09-09T00:37:01.820994616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 00:37:01.821979 containerd[1438]: time="2025-09-09T00:37:01.821954980Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:01.824131 containerd[1438]: time="2025-09-09T00:37:01.824101768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:01.825201 containerd[1438]: time="2025-09-09T00:37:01.824696119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.44362311s" Sep 9 00:37:01.825201 containerd[1438]: time="2025-09-09T00:37:01.824728927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 00:37:01.852519 containerd[1438]: time="2025-09-09T00:37:01.852472678Z" level=info msg="CreateContainer within sandbox \"01471251e6cb957a9249a4b37c03a6d19d8fa223e0b2099bc2af1e409cb20679\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:37:01.870578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090761173.mount: Deactivated successfully. Sep 9 00:37:01.873301 containerd[1438]: time="2025-09-09T00:37:01.873258096Z" level=info msg="CreateContainer within sandbox \"01471251e6cb957a9249a4b37c03a6d19d8fa223e0b2099bc2af1e409cb20679\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6bb4ca5b254af4513a59e6a32a313c7a60408c64c768858d9f1abc80965fb9f9\"" Sep 9 00:37:01.875213 containerd[1438]: time="2025-09-09T00:37:01.873720974Z" level=info msg="StartContainer for \"6bb4ca5b254af4513a59e6a32a313c7a60408c64c768858d9f1abc80965fb9f9\"" Sep 9 00:37:01.926733 systemd[1]: Started cri-containerd-6bb4ca5b254af4513a59e6a32a313c7a60408c64c768858d9f1abc80965fb9f9.scope - libcontainer container 6bb4ca5b254af4513a59e6a32a313c7a60408c64c768858d9f1abc80965fb9f9. Sep 9 00:37:01.954284 containerd[1438]: time="2025-09-09T00:37:01.954159674Z" level=info msg="StartContainer for \"6bb4ca5b254af4513a59e6a32a313c7a60408c64c768858d9f1abc80965fb9f9\" returns successfully" Sep 9 00:37:02.078780 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:37:02.078917 kernel: wireguard: Copyright (C) 2015-2019 Jason A. 
Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 9 00:37:02.180939 containerd[1438]: time="2025-09-09T00:37:02.180881712Z" level=info msg="StopPodSandbox for \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\"" Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.263 [INFO][3773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.265 [INFO][3773] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" iface="eth0" netns="/var/run/netns/cni-f488cd4c-9f9d-9a96-3d85-80e602854243" Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.266 [INFO][3773] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" iface="eth0" netns="/var/run/netns/cni-f488cd4c-9f9d-9a96-3d85-80e602854243" Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.266 [INFO][3773] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" iface="eth0" netns="/var/run/netns/cni-f488cd4c-9f9d-9a96-3d85-80e602854243" Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.267 [INFO][3773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.267 [INFO][3773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.331 [INFO][3782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" HandleID="k8s-pod-network.b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Workload="localhost-k8s-whisker--6f8ff9d7b9--lggcj-eth0" Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.331 [INFO][3782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.331 [INFO][3782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.341 [WARNING][3782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" HandleID="k8s-pod-network.b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Workload="localhost-k8s-whisker--6f8ff9d7b9--lggcj-eth0" Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.341 [INFO][3782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" HandleID="k8s-pod-network.b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Workload="localhost-k8s-whisker--6f8ff9d7b9--lggcj-eth0" Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.343 [INFO][3782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:02.348822 containerd[1438]: 2025-09-09 00:37:02.345 [INFO][3773] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:02.349650 containerd[1438]: time="2025-09-09T00:37:02.349182407Z" level=info msg="TearDown network for sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\" successfully" Sep 9 00:37:02.349650 containerd[1438]: time="2025-09-09T00:37:02.349226658Z" level=info msg="StopPodSandbox for \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\" returns successfully" Sep 9 00:37:02.399992 kubelet[2465]: I0909 00:37:02.399935 2465 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74dmz\" (UniqueName: \"kubernetes.io/projected/c5b7c42d-cee3-4af6-9b63-496fe2a31687-kube-api-access-74dmz\") pod \"c5b7c42d-cee3-4af6-9b63-496fe2a31687\" (UID: \"c5b7c42d-cee3-4af6-9b63-496fe2a31687\") " Sep 9 00:37:02.400369 kubelet[2465]: I0909 00:37:02.400011 2465 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b7c42d-cee3-4af6-9b63-496fe2a31687-whisker-ca-bundle\") pod \"c5b7c42d-cee3-4af6-9b63-496fe2a31687\" (UID: \"c5b7c42d-cee3-4af6-9b63-496fe2a31687\") " Sep 9 00:37:02.400369 kubelet[2465]: I0909 00:37:02.400047 2465 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5b7c42d-cee3-4af6-9b63-496fe2a31687-whisker-backend-key-pair\") pod \"c5b7c42d-cee3-4af6-9b63-496fe2a31687\" (UID: \"c5b7c42d-cee3-4af6-9b63-496fe2a31687\") " Sep 9 00:37:02.410823 kubelet[2465]: I0909 00:37:02.410772 2465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b7c42d-cee3-4af6-9b63-496fe2a31687-kube-api-access-74dmz" (OuterVolumeSpecName: "kube-api-access-74dmz") pod "c5b7c42d-cee3-4af6-9b63-496fe2a31687" (UID: "c5b7c42d-cee3-4af6-9b63-496fe2a31687"). InnerVolumeSpecName "kube-api-access-74dmz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:37:02.410931 kubelet[2465]: I0909 00:37:02.410779 2465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b7c42d-cee3-4af6-9b63-496fe2a31687-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c5b7c42d-cee3-4af6-9b63-496fe2a31687" (UID: "c5b7c42d-cee3-4af6-9b63-496fe2a31687"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:37:02.411370 kubelet[2465]: I0909 00:37:02.411342 2465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5b7c42d-cee3-4af6-9b63-496fe2a31687-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c5b7c42d-cee3-4af6-9b63-496fe2a31687" (UID: "c5b7c42d-cee3-4af6-9b63-496fe2a31687"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:37:02.423246 systemd[1]: Removed slice kubepods-besteffort-podc5b7c42d_cee3_4af6_9b63_496fe2a31687.slice - libcontainer container kubepods-besteffort-podc5b7c42d_cee3_4af6_9b63_496fe2a31687.slice. 
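Every failed RunPodSandbox and StopPodSandbox in the records above fails for the same reason: the Calico CNI plugin stats /var/lib/calico/nodename before doing any add or delete work, and that file only exists once the calico/node container is running and has mounted /var/lib/calico/. The storm clears as soon as calico-node's StartContainer returns successfully. A minimal Go sketch of that gate, not Calico's actual source; only the file path and the advice text are taken from the log:

```go
// Sketch of the check behind every plugin type="calico" failure above.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// nodename reads the node name that calico/node writes at startup.
func nodename() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		// The exact failure mode in this log: calico/node is not running
		// yet, so the file it writes does not exist.
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // kubelet treats this as a CNI error and retries the sandbox
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}
```

Until the file appears, kubelet keeps resyncing every pending pod, which is why the identical error repeats for each sandbox above.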
Sep 9 00:37:02.441165 kubelet[2465]: I0909 00:37:02.441029 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jbvp9" podStartSLOduration=1.078861869 podStartE2EDuration="12.441013164s" podCreationTimestamp="2025-09-09 00:36:50 +0000 UTC" firstStartedPulling="2025-09-09 00:36:50.480836726 +0000 UTC m=+22.304036170" lastFinishedPulling="2025-09-09 00:37:01.842987981 +0000 UTC m=+33.666187465" observedRunningTime="2025-09-09 00:37:02.440695046 +0000 UTC m=+34.263894530" watchObservedRunningTime="2025-09-09 00:37:02.441013164 +0000 UTC m=+34.264212648" Sep 9 00:37:02.500891 kubelet[2465]: I0909 00:37:02.500853 2465 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-74dmz\" (UniqueName: \"kubernetes.io/projected/c5b7c42d-cee3-4af6-9b63-496fe2a31687-kube-api-access-74dmz\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:02.500891 kubelet[2465]: I0909 00:37:02.500884 2465 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b7c42d-cee3-4af6-9b63-496fe2a31687-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:02.500891 kubelet[2465]: I0909 00:37:02.500896 2465 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5b7c42d-cee3-4af6-9b63-496fe2a31687-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:02.540512 systemd[1]: Created slice kubepods-besteffort-pod8f395edc_4241_4a6b_be35_5ef084861ddb.slice - libcontainer container kubepods-besteffort-pod8f395edc_4241_4a6b_be35_5ef084861ddb.slice. Sep 9 00:37:02.554678 systemd[1]: run-netns-cni\x2df488cd4c\x2d9f9d\x2d9a96\x2d3d85\x2d80e602854243.mount: Deactivated successfully. Sep 9 00:37:02.554763 systemd[1]: var-lib-kubelet-pods-c5b7c42d\x2dcee3\x2d4af6\x2d9b63\x2d496fe2a31687-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74dmz.mount: Deactivated successfully. Sep 9 00:37:02.554817 systemd[1]: var-lib-kubelet-pods-c5b7c42d\x2dcee3\x2d4af6\x2d9b63\x2d496fe2a31687-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
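The "Observed pod startup duration" record above carries arithmetic worth making explicit: kubelet's podStartSLOduration is the end-to-end startup time minus the image-pull window, so the roughly 11.4 s spent pulling the calico/node image does not count against the startup SLO. A short Go check using the monotonic offsets (m=+..., in seconds) printed in the record itself:

```go
// Reproduces the podStartSLOduration from the kubelet record above:
// SLO duration = end-to-end startup minus time spent pulling images.
package main

import "fmt"

func main() {
	firstStartedPulling := 22.304036170 // m=+ offset from the record
	lastFinishedPulling := 33.666187465
	podStartE2E := 12.441013164 // observedRunningTime - podCreationTimestamp

	pulling := lastFinishedPulling - firstStartedPulling // 11.362151295 s of image pulls
	slo := podStartE2E - pulling

	fmt.Printf("pulling=%.9fs slo=%.9fs\n", pulling, slo) // slo=1.078861869s, as logged
}
```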
Sep 9 00:37:02.601428 kubelet[2465]: I0909 00:37:02.601294 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wddw\" (UniqueName: \"kubernetes.io/projected/8f395edc-4241-4a6b-be35-5ef084861ddb-kube-api-access-6wddw\") pod \"whisker-7447db769b-m2cdc\" (UID: \"8f395edc-4241-4a6b-be35-5ef084861ddb\") " pod="calico-system/whisker-7447db769b-m2cdc" Sep 9 00:37:02.601428 kubelet[2465]: I0909 00:37:02.601342 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f395edc-4241-4a6b-be35-5ef084861ddb-whisker-backend-key-pair\") pod \"whisker-7447db769b-m2cdc\" (UID: \"8f395edc-4241-4a6b-be35-5ef084861ddb\") " pod="calico-system/whisker-7447db769b-m2cdc" Sep 9 00:37:02.601428 kubelet[2465]: I0909 00:37:02.601367 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f395edc-4241-4a6b-be35-5ef084861ddb-whisker-ca-bundle\") pod \"whisker-7447db769b-m2cdc\" (UID: \"8f395edc-4241-4a6b-be35-5ef084861ddb\") " pod="calico-system/whisker-7447db769b-m2cdc" Sep 9 00:37:02.843768 containerd[1438]: time="2025-09-09T00:37:02.843674969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7447db769b-m2cdc,Uid:8f395edc-4241-4a6b-be35-5ef084861ddb,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:02.966172 systemd-networkd[1372]: cali58f9459b33d: Link UP Sep 9 00:37:02.966360 systemd-networkd[1372]: cali58f9459b33d: Gained carrier Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.875 [INFO][3804] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.889 [INFO][3804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7447db769b--m2cdc-eth0 whisker-7447db769b- calico-system 8f395edc-4241-4a6b-be35-5ef084861ddb 897 0 2025-09-09 00:37:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7447db769b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7447db769b-m2cdc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali58f9459b33d [] [] }} ContainerID="01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" Namespace="calico-system" Pod="whisker-7447db769b-m2cdc" WorkloadEndpoint="localhost-k8s-whisker--7447db769b--m2cdc-" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.889 [INFO][3804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" Namespace="calico-system" Pod="whisker-7447db769b-m2cdc" WorkloadEndpoint="localhost-k8s-whisker--7447db769b--m2cdc-eth0" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.914 [INFO][3819] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" HandleID="k8s-pod-network.01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" Workload="localhost-k8s-whisker--7447db769b--m2cdc-eth0" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.914 [INFO][3819] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" 
HandleID="k8s-pod-network.01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" Workload="localhost-k8s-whisker--7447db769b--m2cdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004df40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7447db769b-m2cdc", "timestamp":"2025-09-09 00:37:02.914648954 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.914 [INFO][3819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.914 [INFO][3819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.914 [INFO][3819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.929 [INFO][3819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" host="localhost" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.935 [INFO][3819] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.942 [INFO][3819] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.944 [INFO][3819] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.946 [INFO][3819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.946 [INFO][3819] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" host="localhost" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.948 [INFO][3819] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.952 [INFO][3819] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" host="localhost" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.956 [INFO][3819] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" host="localhost" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.956 [INFO][3819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" host="localhost" Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.956 [INFO][3819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:37:02.979182 containerd[1438]: 2025-09-09 00:37:02.956 [INFO][3819] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" HandleID="k8s-pod-network.01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" Workload="localhost-k8s-whisker--7447db769b--m2cdc-eth0" Sep 9 00:37:02.979850 containerd[1438]: 2025-09-09 00:37:02.958 [INFO][3804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" Namespace="calico-system" Pod="whisker-7447db769b-m2cdc" WorkloadEndpoint="localhost-k8s-whisker--7447db769b--m2cdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7447db769b--m2cdc-eth0", GenerateName:"whisker-7447db769b-", Namespace:"calico-system", SelfLink:"", UID:"8f395edc-4241-4a6b-be35-5ef084861ddb", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7447db769b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7447db769b-m2cdc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali58f9459b33d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:02.979850 containerd[1438]: 2025-09-09 00:37:02.958 [INFO][3804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" Namespace="calico-system" Pod="whisker-7447db769b-m2cdc" WorkloadEndpoint="localhost-k8s-whisker--7447db769b--m2cdc-eth0" Sep 9 00:37:02.979850 containerd[1438]: 2025-09-09 00:37:02.958 [INFO][3804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58f9459b33d ContainerID="01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" Namespace="calico-system" Pod="whisker-7447db769b-m2cdc" WorkloadEndpoint="localhost-k8s-whisker--7447db769b--m2cdc-eth0" Sep 9 00:37:02.979850 containerd[1438]: 2025-09-09 00:37:02.966 [INFO][3804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" Namespace="calico-system" Pod="whisker-7447db769b-m2cdc" WorkloadEndpoint="localhost-k8s-whisker--7447db769b--m2cdc-eth0" Sep 9 00:37:02.979850 containerd[1438]: 2025-09-09 00:37:02.966 [INFO][3804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" Namespace="calico-system" Pod="whisker-7447db769b-m2cdc" WorkloadEndpoint="localhost-k8s-whisker--7447db769b--m2cdc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7447db769b--m2cdc-eth0", GenerateName:"whisker-7447db769b-", Namespace:"calico-system", SelfLink:"", UID:"8f395edc-4241-4a6b-be35-5ef084861ddb", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7447db769b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a", Pod:"whisker-7447db769b-m2cdc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali58f9459b33d", MAC:"42:b9:0c:57:c0:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:02.979850 containerd[1438]: 2025-09-09 00:37:02.977 [INFO][3804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a" Namespace="calico-system" Pod="whisker-7447db769b-m2cdc" WorkloadEndpoint="localhost-k8s-whisker--7447db769b--m2cdc-eth0" Sep 9 00:37:02.996039 containerd[1438]: time="2025-09-09T00:37:02.995932876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:02.996039 containerd[1438]: time="2025-09-09T00:37:02.996006734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:02.996039 containerd[1438]: time="2025-09-09T00:37:02.996018337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:02.996475 containerd[1438]: time="2025-09-09T00:37:02.996429918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:03.014706 systemd[1]: Started cri-containerd-01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a.scope - libcontainer container 01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a. 
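The ipam/ipam.go records above trace Calico's allocation path for the new whisker endpoint: acquire the host-wide IPAM lock, confirm this host's affinity for the 192.168.88.128/26 block, claim one address from it (192.168.88.129), write the block back to the datastore, and release the lock. A toy sketch of just the block arithmetic; Calico's real allocator persists blocks and handles in its datastore, and this sketch skips the block's first address only to match the log, which does not show why .128 was not handed out:

```go
// Toy model of claiming the first free address from a host-affine /26 block.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr      netip.Prefix        // e.g. 192.168.88.128/26: 64 addresses
	allocated map[netip.Addr]bool // would live in the Calico datastore
}

// claim returns the first unallocated address in the block.
func (b *block) claim() (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if a == b.cidr.Addr() {
			continue // skip .128 to match the log's first assignment of .129
		}
		if !b.allocated[a] {
			b.allocated[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; Calico would claim another block
}

func main() {
	blk := &block{
		cidr:      netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]bool{},
	}
	if ip, ok := blk.claim(); ok {
		fmt.Println(ip) // 192.168.88.129, as assigned to whisker-7447db769b-m2cdc
	}
}
```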
Sep 9 00:37:03.023989 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:03.045280 containerd[1438]: time="2025-09-09T00:37:03.045206353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7447db769b-m2cdc,Uid:8f395edc-4241-4a6b-be35-5ef084861ddb,Namespace:calico-system,Attempt:0,} returns sandbox id \"01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a\"" Sep 9 00:37:03.046743 containerd[1438]: time="2025-09-09T00:37:03.046685585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:37:04.269635 containerd[1438]: time="2025-09-09T00:37:04.269594680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:04.270069 kubelet[2465]: I0909 00:37:04.269762 2465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b7c42d-cee3-4af6-9b63-496fe2a31687" path="/var/lib/kubelet/pods/c5b7c42d-cee3-4af6-9b63-496fe2a31687/volumes" Sep 9 00:37:04.270368 containerd[1438]: time="2025-09-09T00:37:04.270339011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 9 00:37:04.271398 containerd[1438]: time="2025-09-09T00:37:04.271354005Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:04.275876 containerd[1438]: time="2025-09-09T00:37:04.275837637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:04.276616 containerd[1438]: time="2025-09-09T00:37:04.276591410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.22984177s" Sep 9 00:37:04.276651 containerd[1438]: time="2025-09-09T00:37:04.276621017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 00:37:04.290468 containerd[1438]: time="2025-09-09T00:37:04.290421552Z" level=info msg="CreateContainer within sandbox \"01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:37:04.302660 containerd[1438]: time="2025-09-09T00:37:04.302608917Z" level=info msg="CreateContainer within sandbox \"01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"253ad25674d1d149351fe0bdda322c91b4177c86a7079e469cdbe4e935359eaf\"" Sep 9 00:37:04.303560 containerd[1438]: time="2025-09-09T00:37:04.303362490Z" level=info msg="StartContainer for \"253ad25674d1d149351fe0bdda322c91b4177c86a7079e469cdbe4e935359eaf\"" Sep 9 00:37:04.332716 systemd[1]: Started cri-containerd-253ad25674d1d149351fe0bdda322c91b4177c86a7079e469cdbe4e935359eaf.scope - libcontainer container 253ad25674d1d149351fe0bdda322c91b4177c86a7079e469cdbe4e935359eaf. 
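The records above walk the standard CRI sequence for the new whisker pod: RunPodSandbox (which drove the Calico ADD just shown), then PullImage, CreateContainer, and StartContainer. A compressed sketch of that sequence against containerd's CRI socket using the k8s.io/cri-api client types; the socket path is the conventional containerd one, the configs are pared down to the metadata visible in the log, and everything beyond that is an assumption:

```go
// Sketch of the CRI call sequence visible in the log, not kubelet's code.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	check(err)
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)
	img := runtime.NewImageServiceClient(conn)
	ctx := context.Background()

	// Mirrors the PodSandboxMetadata printed in the RunPodSandbox record.
	cfg := &runtime.PodSandboxConfig{Metadata: &runtime.PodSandboxMetadata{
		Name:      "whisker-7447db769b-m2cdc",
		Uid:       "8f395edc-4241-4a6b-be35-5ef084861ddb",
		Namespace: "calico-system",
	}}
	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: cfg})
	check(err) // fails like the earlier records if the CNI ADD fails

	_, err = img.PullImage(ctx, &runtime.PullImageRequest{
		Image: &runtime.ImageSpec{Image: "ghcr.io/flatcar/calico/whisker:v3.30.3"},
	})
	check(err)

	c, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "whisker"},
			Image:    &runtime.ImageSpec{Image: "ghcr.io/flatcar/calico/whisker:v3.30.3"},
		},
		SandboxConfig: cfg,
	})
	check(err)
	_, err = rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: c.ContainerId})
	check(err)
}
```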
Sep 9 00:37:04.362900 containerd[1438]: time="2025-09-09T00:37:04.362799248Z" level=info msg="StartContainer for \"253ad25674d1d149351fe0bdda322c91b4177c86a7079e469cdbe4e935359eaf\" returns successfully" Sep 9 00:37:04.364039 containerd[1438]: time="2025-09-09T00:37:04.364006765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:37:04.619648 systemd-networkd[1372]: cali58f9459b33d: Gained IPv6LL Sep 9 00:37:05.946883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107387033.mount: Deactivated successfully. Sep 9 00:37:05.961589 containerd[1438]: time="2025-09-09T00:37:05.961531783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:05.962198 containerd[1438]: time="2025-09-09T00:37:05.962165244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 9 00:37:05.963068 containerd[1438]: time="2025-09-09T00:37:05.963030517Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:05.965492 containerd[1438]: time="2025-09-09T00:37:05.965462099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:05.966564 containerd[1438]: time="2025-09-09T00:37:05.966345776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.602303682s" Sep 9 00:37:05.966564 containerd[1438]: time="2025-09-09T00:37:05.966385625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 9 00:37:05.970655 containerd[1438]: time="2025-09-09T00:37:05.970612407Z" level=info msg="CreateContainer within sandbox \"01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:37:05.985287 containerd[1438]: time="2025-09-09T00:37:05.985215822Z" level=info msg="CreateContainer within sandbox \"01af13b735299371ad421e1d7568fdfd0e4fca2109441994fb62fc0614c1e51a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e40f214c5624538a78f29faf453c779c079406926b2ff7e8891924ab73ae3192\"" Sep 9 00:37:05.985813 containerd[1438]: time="2025-09-09T00:37:05.985781868Z" level=info msg="StartContainer for \"e40f214c5624538a78f29faf453c779c079406926b2ff7e8891924ab73ae3192\"" Sep 9 00:37:06.047770 systemd[1]: Started cri-containerd-e40f214c5624538a78f29faf453c779c079406926b2ff7e8891924ab73ae3192.scope - libcontainer container e40f214c5624538a78f29faf453c779c079406926b2ff7e8891924ab73ae3192. 
Sep 9 00:37:06.078581 containerd[1438]: time="2025-09-09T00:37:06.078502931Z" level=info msg="StartContainer for \"e40f214c5624538a78f29faf453c779c079406926b2ff7e8891924ab73ae3192\" returns successfully" Sep 9 00:37:10.268132 containerd[1438]: time="2025-09-09T00:37:10.267404621Z" level=info msg="StopPodSandbox for \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\"" Sep 9 00:37:10.268132 containerd[1438]: time="2025-09-09T00:37:10.267658070Z" level=info msg="StopPodSandbox for \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\"" Sep 9 00:37:10.268132 containerd[1438]: time="2025-09-09T00:37:10.267823142Z" level=info msg="StopPodSandbox for \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\"" Sep 9 00:37:10.319271 kubelet[2465]: I0909 00:37:10.319193 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7447db769b-m2cdc" podStartSLOduration=5.398330268 podStartE2EDuration="8.319176449s" podCreationTimestamp="2025-09-09 00:37:02 +0000 UTC" firstStartedPulling="2025-09-09 00:37:03.046349305 +0000 UTC m=+34.869548789" lastFinishedPulling="2025-09-09 00:37:05.967195486 +0000 UTC m=+37.790394970" observedRunningTime="2025-09-09 00:37:06.441077718 +0000 UTC m=+38.264277202" watchObservedRunningTime="2025-09-09 00:37:10.319176449 +0000 UTC m=+42.142375933" Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.321 [INFO][4337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.321 [INFO][4337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" iface="eth0" netns="/var/run/netns/cni-0ff6e5ff-d0cf-ec3b-75a3-71ee7d132235" Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.321 [INFO][4337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" iface="eth0" netns="/var/run/netns/cni-0ff6e5ff-d0cf-ec3b-75a3-71ee7d132235" Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.322 [INFO][4337] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" iface="eth0" netns="/var/run/netns/cni-0ff6e5ff-d0cf-ec3b-75a3-71ee7d132235" Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.323 [INFO][4337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.323 [INFO][4337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.354 [INFO][4366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" HandleID="k8s-pod-network.0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.354 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.354 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.363 [WARNING][4366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" HandleID="k8s-pod-network.0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.363 [INFO][4366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" HandleID="k8s-pod-network.0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.365 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:10.372904 containerd[1438]: 2025-09-09 00:37:10.369 [INFO][4337] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:10.373429 containerd[1438]: time="2025-09-09T00:37:10.373308412Z" level=info msg="TearDown network for sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\" successfully" Sep 9 00:37:10.373429 containerd[1438]: time="2025-09-09T00:37:10.373339738Z" level=info msg="StopPodSandbox for \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\" returns successfully" Sep 9 00:37:10.373855 kubelet[2465]: E0909 00:37:10.373809 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:10.374988 systemd[1]: run-netns-cni\x2d0ff6e5ff\x2dd0cf\x2dec3b\x2d75a3\x2d71ee7d132235.mount: Deactivated successfully. Sep 9 00:37:10.375313 containerd[1438]: time="2025-09-09T00:37:10.375244785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f48cr,Uid:f85e14c3-579a-4825-b5fe-c349bc6999fc,Namespace:kube-system,Attempt:1,}" Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.332 [INFO][4347] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.332 [INFO][4347] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" iface="eth0" netns="/var/run/netns/cni-5365dd7b-0814-232b-2410-18cf7576bdc7" Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.333 [INFO][4347] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" iface="eth0" netns="/var/run/netns/cni-5365dd7b-0814-232b-2410-18cf7576bdc7" Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.335 [INFO][4347] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" iface="eth0" netns="/var/run/netns/cni-5365dd7b-0814-232b-2410-18cf7576bdc7" Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.335 [INFO][4347] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.335 [INFO][4347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.360 [INFO][4373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" HandleID="k8s-pod-network.bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.360 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.365 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.375 [WARNING][4373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" HandleID="k8s-pod-network.bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.375 [INFO][4373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" HandleID="k8s-pod-network.bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.378 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:10.381963 containerd[1438]: 2025-09-09 00:37:10.379 [INFO][4347] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:10.382532 containerd[1438]: time="2025-09-09T00:37:10.382431052Z" level=info msg="TearDown network for sandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\" successfully" Sep 9 00:37:10.382532 containerd[1438]: time="2025-09-09T00:37:10.382452376Z" level=info msg="StopPodSandbox for \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\" returns successfully" Sep 9 00:37:10.383332 containerd[1438]: time="2025-09-09T00:37:10.383016605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d549548b-j9zf9,Uid:f592feec-dcfc-407a-bf1e-672a950349d6,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:37:10.384694 systemd[1]: run-netns-cni\x2d5365dd7b\x2d0814\x2d232b\x2d2410\x2d18cf7576bdc7.mount: Deactivated successfully. Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.345 [INFO][4338] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.345 [INFO][4338] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" iface="eth0" netns="/var/run/netns/cni-e81e0cec-d6e6-5fd7-35da-d35932916cc8" Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.346 [INFO][4338] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" iface="eth0" netns="/var/run/netns/cni-e81e0cec-d6e6-5fd7-35da-d35932916cc8" Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.346 [INFO][4338] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" iface="eth0" netns="/var/run/netns/cni-e81e0cec-d6e6-5fd7-35da-d35932916cc8" Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.346 [INFO][4338] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.346 [INFO][4338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.387 [INFO][4381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" HandleID="k8s-pod-network.5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.387 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.389 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.403 [WARNING][4381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" HandleID="k8s-pod-network.5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.403 [INFO][4381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" HandleID="k8s-pod-network.5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.405 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:10.409350 containerd[1438]: 2025-09-09 00:37:10.407 [INFO][4338] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:10.410248 containerd[1438]: time="2025-09-09T00:37:10.409548443Z" level=info msg="TearDown network for sandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\" successfully" Sep 9 00:37:10.410248 containerd[1438]: time="2025-09-09T00:37:10.409573208Z" level=info msg="StopPodSandbox for \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\" returns successfully" Sep 9 00:37:10.410875 containerd[1438]: time="2025-09-09T00:37:10.410487584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qpjc4,Uid:bbe52a64-481a-4444-8cd9-6a232570debc,Namespace:calico-system,Attempt:1,}" Sep 9 00:37:10.413212 systemd[1]: run-netns-cni\x2de81e0cec\x2dd6e6\x2d5fd7\x2d35da\x2dd35932916cc8.mount: Deactivated successfully. Sep 9 00:37:10.514864 systemd-networkd[1372]: cali3fd083c7c6d: Link UP Sep 9 00:37:10.518749 systemd-networkd[1372]: cali3fd083c7c6d: Gained carrier Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.421 [INFO][4393] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.437 [INFO][4393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--f48cr-eth0 coredns-674b8bbfcf- kube-system f85e14c3-579a-4825-b5fe-c349bc6999fc 939 0 2025-09-09 00:36:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-f48cr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3fd083c7c6d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" Namespace="kube-system" Pod="coredns-674b8bbfcf-f48cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f48cr-" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.437 [INFO][4393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" Namespace="kube-system" Pod="coredns-674b8bbfcf-f48cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.466 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" HandleID="k8s-pod-network.f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.466 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" HandleID="k8s-pod-network.f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011aac0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-f48cr", "timestamp":"2025-09-09 00:37:10.466219496 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.466 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.466 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.466 [INFO][4434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.481 [INFO][4434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" host="localhost" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.486 [INFO][4434] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.489 [INFO][4434] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.494 [INFO][4434] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.496 [INFO][4434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.496 [INFO][4434] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" host="localhost" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.498 [INFO][4434] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95 Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.502 [INFO][4434] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" host="localhost" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.506 [INFO][4434] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" host="localhost" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.506 [INFO][4434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" host="localhost" Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.506 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:37:10.532777 containerd[1438]: 2025-09-09 00:37:10.506 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" HandleID="k8s-pod-network.f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:10.533287 containerd[1438]: 2025-09-09 00:37:10.509 [INFO][4393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" Namespace="kube-system" Pod="coredns-674b8bbfcf-f48cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f48cr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f85e14c3-579a-4825-b5fe-c349bc6999fc", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-f48cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fd083c7c6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:10.533287 containerd[1438]: 2025-09-09 00:37:10.509 [INFO][4393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" Namespace="kube-system" Pod="coredns-674b8bbfcf-f48cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:10.533287 containerd[1438]: 2025-09-09 00:37:10.509 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fd083c7c6d ContainerID="f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" Namespace="kube-system" Pod="coredns-674b8bbfcf-f48cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:10.533287 containerd[1438]: 2025-09-09 00:37:10.519 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" Namespace="kube-system" Pod="coredns-674b8bbfcf-f48cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:10.533287 
containerd[1438]: 2025-09-09 00:37:10.519 [INFO][4393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" Namespace="kube-system" Pod="coredns-674b8bbfcf-f48cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f48cr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f85e14c3-579a-4825-b5fe-c349bc6999fc", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95", Pod:"coredns-674b8bbfcf-f48cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fd083c7c6d", MAC:"b2:0b:cc:91:0e:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:10.533287 containerd[1438]: 2025-09-09 00:37:10.529 [INFO][4393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95" Namespace="kube-system" Pod="coredns-674b8bbfcf-f48cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:10.546739 containerd[1438]: time="2025-09-09T00:37:10.546450254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:10.546739 containerd[1438]: time="2025-09-09T00:37:10.546499903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:10.546739 containerd[1438]: time="2025-09-09T00:37:10.546510345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:10.546739 containerd[1438]: time="2025-09-09T00:37:10.546598442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:10.573782 systemd[1]: Started cri-containerd-f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95.scope - libcontainer container f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95. Sep 9 00:37:10.585099 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:10.609626 containerd[1438]: time="2025-09-09T00:37:10.609588794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f48cr,Uid:f85e14c3-579a-4825-b5fe-c349bc6999fc,Namespace:kube-system,Attempt:1,} returns sandbox id \"f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95\"" Sep 9 00:37:10.611800 kubelet[2465]: E0909 00:37:10.610572 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:10.615362 systemd-networkd[1372]: cali81508c6f846: Link UP Sep 9 00:37:10.616156 systemd-networkd[1372]: cali81508c6f846: Gained carrier Sep 9 00:37:10.618346 containerd[1438]: time="2025-09-09T00:37:10.618177931Z" level=info msg="CreateContainer within sandbox \"f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.447 [INFO][4409] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.466 [INFO][4409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--qpjc4-eth0 goldmane-54d579b49d- calico-system bbe52a64-481a-4444-8cd9-6a232570debc 941 0 2025-09-09 00:36:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-qpjc4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali81508c6f846 [] [] }} ContainerID="9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" Namespace="calico-system" Pod="goldmane-54d579b49d-qpjc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qpjc4-" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.466 [INFO][4409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" Namespace="calico-system" Pod="goldmane-54d579b49d-qpjc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.498 [INFO][4447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" HandleID="k8s-pod-network.9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.498 [INFO][4447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" HandleID="k8s-pod-network.9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x4000137dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-qpjc4", "timestamp":"2025-09-09 00:37:10.498297924 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.498 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.506 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.506 [INFO][4447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.581 [INFO][4447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" host="localhost" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.587 [INFO][4447] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.592 [INFO][4447] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.595 [INFO][4447] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.597 [INFO][4447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.597 [INFO][4447] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" host="localhost" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.599 [INFO][4447] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8 Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.603 [INFO][4447] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" host="localhost" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.609 [INFO][4447] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" host="localhost" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.609 [INFO][4447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" host="localhost" Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.609 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:37:10.632379 containerd[1438]: 2025-09-09 00:37:10.609 [INFO][4447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" HandleID="k8s-pod-network.9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:10.632947 containerd[1438]: 2025-09-09 00:37:10.613 [INFO][4409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" Namespace="calico-system" Pod="goldmane-54d579b49d-qpjc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qpjc4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"bbe52a64-481a-4444-8cd9-6a232570debc", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-qpjc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali81508c6f846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:10.632947 containerd[1438]: 2025-09-09 00:37:10.614 [INFO][4409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" Namespace="calico-system" Pod="goldmane-54d579b49d-qpjc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:10.632947 containerd[1438]: 2025-09-09 00:37:10.614 [INFO][4409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81508c6f846 ContainerID="9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" Namespace="calico-system" Pod="goldmane-54d579b49d-qpjc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:10.632947 containerd[1438]: 2025-09-09 00:37:10.615 [INFO][4409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" Namespace="calico-system" Pod="goldmane-54d579b49d-qpjc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:10.632947 containerd[1438]: 2025-09-09 00:37:10.615 [INFO][4409] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" Namespace="calico-system" Pod="goldmane-54d579b49d-qpjc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qpjc4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"bbe52a64-481a-4444-8cd9-6a232570debc", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8", Pod:"goldmane-54d579b49d-qpjc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali81508c6f846", MAC:"6e:8b:e9:cb:a9:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:10.632947 containerd[1438]: 2025-09-09 00:37:10.630 [INFO][4409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8" Namespace="calico-system" Pod="goldmane-54d579b49d-qpjc4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:10.641704 containerd[1438]: time="2025-09-09T00:37:10.641654700Z" level=info msg="CreateContainer within sandbox \"f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2de34cf535869e919a7a7ca1cd22f53b2bbb91bef5ad140d5b5fedc8d584617b\"" Sep 9 00:37:10.642590 containerd[1438]: time="2025-09-09T00:37:10.642206167Z" level=info msg="StartContainer for \"2de34cf535869e919a7a7ca1cd22f53b2bbb91bef5ad140d5b5fedc8d584617b\"" Sep 9 00:37:10.664565 containerd[1438]: time="2025-09-09T00:37:10.664131717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:10.664565 containerd[1438]: time="2025-09-09T00:37:10.664503108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:10.664565 containerd[1438]: time="2025-09-09T00:37:10.664552358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:10.664819 containerd[1438]: time="2025-09-09T00:37:10.664650577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:10.666699 systemd[1]: Started cri-containerd-2de34cf535869e919a7a7ca1cd22f53b2bbb91bef5ad140d5b5fedc8d584617b.scope - libcontainer container 2de34cf535869e919a7a7ca1cd22f53b2bbb91bef5ad140d5b5fedc8d584617b. 
Sep 9 00:37:10.683714 systemd[1]: Started cri-containerd-9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8.scope - libcontainer container 9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8. Sep 9 00:37:10.727435 containerd[1438]: time="2025-09-09T00:37:10.727377678Z" level=info msg="StartContainer for \"2de34cf535869e919a7a7ca1cd22f53b2bbb91bef5ad140d5b5fedc8d584617b\" returns successfully" Sep 9 00:37:10.743158 systemd-networkd[1372]: cali1bb5c0efd9e: Link UP Sep 9 00:37:10.746223 systemd-networkd[1372]: cali1bb5c0efd9e: Gained carrier Sep 9 00:37:10.756532 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.473 [INFO][4423] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.487 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0 calico-apiserver-78d549548b- calico-apiserver f592feec-dcfc-407a-bf1e-672a950349d6 940 0 2025-09-09 00:36:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78d549548b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78d549548b-j9zf9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1bb5c0efd9e [] [] }} ContainerID="fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-j9zf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--j9zf9-" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.487 [INFO][4423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-j9zf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.521 [INFO][4454] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" HandleID="k8s-pod-network.fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.522 [INFO][4454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" HandleID="k8s-pod-network.fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001365e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78d549548b-j9zf9", "timestamp":"2025-09-09 00:37:10.521954008 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.522 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.612 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.612 [INFO][4454] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.682 [INFO][4454] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" host="localhost" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.690 [INFO][4454] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.697 [INFO][4454] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.701 [INFO][4454] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.706 [INFO][4454] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.707 [INFO][4454] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" host="localhost" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.711 [INFO][4454] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.717 [INFO][4454] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" host="localhost" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.733 [INFO][4454] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" host="localhost" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.733 [INFO][4454] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" host="localhost" Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.733 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:37:10.763299 containerd[1438]: 2025-09-09 00:37:10.733 [INFO][4454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" HandleID="k8s-pod-network.fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:10.765090 containerd[1438]: 2025-09-09 00:37:10.738 [INFO][4423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-j9zf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0", GenerateName:"calico-apiserver-78d549548b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f592feec-dcfc-407a-bf1e-672a950349d6", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d549548b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78d549548b-j9zf9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bb5c0efd9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:10.765090 containerd[1438]: 2025-09-09 00:37:10.738 [INFO][4423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-j9zf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:10.765090 containerd[1438]: 2025-09-09 00:37:10.738 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bb5c0efd9e ContainerID="fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-j9zf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:10.765090 containerd[1438]: 2025-09-09 00:37:10.748 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-j9zf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:10.765090 containerd[1438]: 2025-09-09 00:37:10.750 [INFO][4423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-j9zf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0", GenerateName:"calico-apiserver-78d549548b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f592feec-dcfc-407a-bf1e-672a950349d6", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d549548b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a", Pod:"calico-apiserver-78d549548b-j9zf9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bb5c0efd9e", MAC:"ce:03:60:fc:36:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:10.765090 containerd[1438]: 2025-09-09 00:37:10.759 [INFO][4423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-j9zf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:10.786054 containerd[1438]: time="2025-09-09T00:37:10.785907609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:10.786054 containerd[1438]: time="2025-09-09T00:37:10.785959499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:10.786231 containerd[1438]: time="2025-09-09T00:37:10.786184063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:10.786376 containerd[1438]: time="2025-09-09T00:37:10.786343773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:10.793942 containerd[1438]: time="2025-09-09T00:37:10.793876107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qpjc4,Uid:bbe52a64-481a-4444-8cd9-6a232570debc,Namespace:calico-system,Attempt:1,} returns sandbox id \"9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8\"" Sep 9 00:37:10.804379 containerd[1438]: time="2025-09-09T00:37:10.804336165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:37:10.816744 systemd[1]: Started cri-containerd-fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a.scope - libcontainer container fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a. Sep 9 00:37:10.830679 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:10.855719 containerd[1438]: time="2025-09-09T00:37:10.855679390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d549548b-j9zf9,Uid:f592feec-dcfc-407a-bf1e-672a950349d6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a\"" Sep 9 00:37:11.268514 containerd[1438]: time="2025-09-09T00:37:11.268459825Z" level=info msg="StopPodSandbox for \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\"" Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.312 [INFO][4680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.313 [INFO][4680] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" iface="eth0" netns="/var/run/netns/cni-38c3a1e2-65fe-fa60-d4d4-c992e934e6ef" Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.313 [INFO][4680] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" iface="eth0" netns="/var/run/netns/cni-38c3a1e2-65fe-fa60-d4d4-c992e934e6ef" Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.313 [INFO][4680] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" iface="eth0" netns="/var/run/netns/cni-38c3a1e2-65fe-fa60-d4d4-c992e934e6ef" Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.313 [INFO][4680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.313 [INFO][4680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.335 [INFO][4688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" HandleID="k8s-pod-network.2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.335 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.335 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.344 [WARNING][4688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" HandleID="k8s-pod-network.2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.345 [INFO][4688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" HandleID="k8s-pod-network.2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.346 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:11.352984 containerd[1438]: 2025-09-09 00:37:11.350 [INFO][4680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:11.353499 containerd[1438]: time="2025-09-09T00:37:11.353387952Z" level=info msg="TearDown network for sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\" successfully" Sep 9 00:37:11.353499 containerd[1438]: time="2025-09-09T00:37:11.353414917Z" level=info msg="StopPodSandbox for \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\" returns successfully" Sep 9 00:37:11.353757 kubelet[2465]: E0909 00:37:11.353732 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:11.354419 containerd[1438]: time="2025-09-09T00:37:11.354391060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dzzmz,Uid:af9f3d40-f917-4ead-8678-9e2ee2432935,Namespace:kube-system,Attempt:1,}" Sep 9 00:37:11.389744 systemd[1]: run-netns-cni\x2d38c3a1e2\x2d65fe\x2dfa60\x2dd4d4\x2dc992e934e6ef.mount: Deactivated successfully. 
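
The teardown above is Calico's CNI DEL path: the plugin enters the sandbox's netns (already empty here, hence "Workload's veth was already gone. Nothing to do."), then releases the pod's IPAM allocation under the host-wide lock. The WARNING at ipam_plugin.go 429 is benign and expected on repeated DELs: the primary release is keyed on the HandleID, and when that record is already gone the plugin falls back to releasing by workload ID so the operation stays idempotent. A minimal sketch of that fallback, with illustrative names rather than Calico's real API:

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNotFound = errors.New("allocation not found")

    // releaseByHandle stands in for the primary release path keyed on the
    // CNI HandleID ("k8s-pod-network.<containerID>").
    func releaseByHandle(handleID string) error {
    	return errNotFound // simulate the WARNING at ipam_plugin.go 429
    }

    // releaseByWorkload stands in for the fallback keyed on the workload
    // name ("localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0").
    func releaseByWorkload(workload string) error { return nil }

    func releaseIPs(handleID, workload string) error {
    	// ... host-wide IPAM lock held here (ipam_plugin.go 353/368) ...
    	if err := releaseByHandle(handleID); err != nil {
    		if !errors.Is(err, errNotFound) {
    			return err
    		}
    		// "Asked to release address but it doesn't exist. Ignoring":
    		// fall through to the workload-keyed release so a retried DEL
    		// after a partial teardown still converges.
    		return releaseByWorkload(workload)
    	}
    	return nil
    }

    func main() {
    	err := releaseIPs(
    		"k8s-pod-network.2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d",
    		"localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0",
    	)
    	fmt.Println("teardown complete, err =", err)
    }

Kubelet retries StopPodSandbox freely, so both release paths treating "not found" as success is what keeps the DEL idempotent.
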
Sep 9 00:37:11.446767 kubelet[2465]: E0909 00:37:11.446307 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:11.466880 kubelet[2465]: I0909 00:37:11.466811 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f48cr" podStartSLOduration=35.466792592 podStartE2EDuration="35.466792592s" podCreationTimestamp="2025-09-09 00:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:37:11.464059838 +0000 UTC m=+43.287259322" watchObservedRunningTime="2025-09-09 00:37:11.466792592 +0000 UTC m=+43.289992076" Sep 9 00:37:11.516514 systemd-networkd[1372]: calicb3c862558b: Link UP Sep 9 00:37:11.516709 systemd-networkd[1372]: calicb3c862558b: Gained carrier Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.387 [INFO][4696] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.404 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0 coredns-674b8bbfcf- kube-system af9f3d40-f917-4ead-8678-9e2ee2432935 960 0 2025-09-09 00:36:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-dzzmz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb3c862558b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" Namespace="kube-system" Pod="coredns-674b8bbfcf-dzzmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dzzmz-" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.404 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" Namespace="kube-system" Pod="coredns-674b8bbfcf-dzzmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.432 [INFO][4711] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" HandleID="k8s-pod-network.5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.432 [INFO][4711] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" HandleID="k8s-pod-network.5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-dzzmz", "timestamp":"2025-09-09 00:37:11.432802962 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 
00:37:11.433 [INFO][4711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.433 [INFO][4711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.433 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.445 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" host="localhost" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.466 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.473 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.475 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.479 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.479 [INFO][4711] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" host="localhost" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.483 [INFO][4711] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658 Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.491 [INFO][4711] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" host="localhost" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.509 [INFO][4711] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" host="localhost" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.509 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" host="localhost" Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.509 [INFO][4711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
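
Every assignment in this section runs under that host-wide IPAM lock and draws from the single /26 block (192.168.88.128/26) whose affinity is confirmed for this host, which is why the pods land on consecutive addresses: .132 (calico-apiserver-j9zf9 above), .133 (coredns-dzzmz here), then .134 and .135 below. A toy version of "claim the lowest free address in an affine block"; a map stands in for Calico's datastore-backed allocation bitmap, and the pre-used addresses below .132 are presumed handed out earlier in the boot:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // nextFree claims the lowest unallocated address in a block. Calico
    // does this against an etcd/Kubernetes-backed block document while
    // holding the host-wide lock; here a map stands in for that state.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
    	for a := block.Addr(); block.Contains(a); a = a.Next() {
    		if !used[a] {
    			used[a] = true
    			return a, true
    		}
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	block := netip.MustParsePrefix("192.168.88.128/26")
    	used := map[netip.Addr]bool{}
    	// .128-.131 assumed taken earlier; .132 is confirmed above.
    	for _, s := range []string{"192.168.88.128", "192.168.88.129",
    		"192.168.88.130", "192.168.88.131", "192.168.88.132"} {
    		used[netip.MustParseAddr(s)] = true
    	}
    	a, _ := nextFree(block, used)
    	fmt.Println(a) // 192.168.88.133, matching the claim in the log
    }
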
Sep 9 00:37:11.533145 containerd[1438]: 2025-09-09 00:37:11.509 [INFO][4711] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" HandleID="k8s-pod-network.5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:11.533770 containerd[1438]: 2025-09-09 00:37:11.512 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" Namespace="kube-system" Pod="coredns-674b8bbfcf-dzzmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"af9f3d40-f917-4ead-8678-9e2ee2432935", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-dzzmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb3c862558b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:11.533770 containerd[1438]: 2025-09-09 00:37:11.513 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" Namespace="kube-system" Pod="coredns-674b8bbfcf-dzzmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:11.533770 containerd[1438]: 2025-09-09 00:37:11.513 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb3c862558b ContainerID="5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" Namespace="kube-system" Pod="coredns-674b8bbfcf-dzzmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:11.533770 containerd[1438]: 2025-09-09 00:37:11.514 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" Namespace="kube-system" Pod="coredns-674b8bbfcf-dzzmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:11.533770 
containerd[1438]: 2025-09-09 00:37:11.514 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" Namespace="kube-system" Pod="coredns-674b8bbfcf-dzzmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"af9f3d40-f917-4ead-8678-9e2ee2432935", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658", Pod:"coredns-674b8bbfcf-dzzmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb3c862558b", MAC:"ee:77:c5:3f:fd:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:11.533770 containerd[1438]: 2025-09-09 00:37:11.528 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658" Namespace="kube-system" Pod="coredns-674b8bbfcf-dzzmz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:11.604727 containerd[1438]: time="2025-09-09T00:37:11.603273171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:11.604727 containerd[1438]: time="2025-09-09T00:37:11.603721455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:11.604727 containerd[1438]: time="2025-09-09T00:37:11.603734937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:11.604727 containerd[1438]: time="2025-09-09T00:37:11.603888766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:11.622351 systemd[1]: run-containerd-runc-k8s.io-5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658-runc.YXO8IW.mount: Deactivated successfully. 
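
Two details of the endpoint dumps above are easy to misread. First, the Ports fields print in Go %#v notation, so Port:0x35 is DNS port 53 (UDP and TCP) and Port:0x23c1 is the CoreDNS metrics port 9153. Second, the endpoint is written in two phases: the k8s.go 418 dump ("Populated endpoint") still has ContainerID:"" and MAC:"", and only the k8s.go 446 dump ("Added Mac, interface name, and active container ID") carries ee:77:c5:3f:fd:ad before k8s.go 532 persists it. A trimmed stand-in for v3.WorkloadEndpointSpec showing just the fields that change between the two dumps:

    package main

    import "fmt"

    // endpoint keeps only the fields that differ between the k8s.go 418
    // and k8s.go 446 log lines; the real struct is much larger.
    type endpoint struct {
    	ContainerID string
    	Interface   string
    	MAC         string
    	IPNetworks  []string
    }

    func main() {
    	// Phase 1 (k8s.go 418): populated from IPAM, identity still blank.
    	ep := endpoint{
    		Interface:  "calicb3c862558b",
    		IPNetworks: []string{"192.168.88.133/32"},
    	}
    	// Phase 2 (k8s.go 446): the dataplane has created the veth, so the
    	// MAC and active container ID are recorded before the datastore write.
    	ep.ContainerID = "5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658"
    	ep.MAC = "ee:77:c5:3f:fd:ad"
    	fmt.Printf("%#v\n", ep)
    	fmt.Println(0x35, 0x23c1) // the hex ports in the dump: 53 9153
    }
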
Sep 9 00:37:11.635765 systemd[1]: Started cri-containerd-5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658.scope - libcontainer container 5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658. Sep 9 00:37:11.646258 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:11.666655 containerd[1438]: time="2025-09-09T00:37:11.666581433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dzzmz,Uid:af9f3d40-f917-4ead-8678-9e2ee2432935,Namespace:kube-system,Attempt:1,} returns sandbox id \"5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658\"" Sep 9 00:37:11.668635 kubelet[2465]: E0909 00:37:11.668610 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:11.673031 containerd[1438]: time="2025-09-09T00:37:11.672986797Z" level=info msg="CreateContainer within sandbox \"5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:37:11.691869 containerd[1438]: time="2025-09-09T00:37:11.691821538Z" level=info msg="CreateContainer within sandbox \"5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40c83025e272c3d11db2ff0d90fccb220388862a5700d1addaf540cb4078471b\"" Sep 9 00:37:11.694781 containerd[1438]: time="2025-09-09T00:37:11.693227282Z" level=info msg="StartContainer for \"40c83025e272c3d11db2ff0d90fccb220388862a5700d1addaf540cb4078471b\"" Sep 9 00:37:11.719718 systemd[1]: Started cri-containerd-40c83025e272c3d11db2ff0d90fccb220388862a5700d1addaf540cb4078471b.scope - libcontainer container 40c83025e272c3d11db2ff0d90fccb220388862a5700d1addaf540cb4078471b. Sep 9 00:37:11.744053 containerd[1438]: time="2025-09-09T00:37:11.743880605Z" level=info msg="StartContainer for \"40c83025e272c3d11db2ff0d90fccb220388862a5700d1addaf540cb4078471b\" returns successfully" Sep 9 00:37:12.268070 containerd[1438]: time="2025-09-09T00:37:12.267638801Z" level=info msg="StopPodSandbox for \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\"" Sep 9 00:37:12.268070 containerd[1438]: time="2025-09-09T00:37:12.267772746Z" level=info msg="StopPodSandbox for \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\"" Sep 9 00:37:12.378918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129303577.mount: Deactivated successfully. Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.339 [INFO][4856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.339 [INFO][4856] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" iface="eth0" netns="/var/run/netns/cni-322544ee-a371-c137-32ff-37ab1ec614d8" Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.340 [INFO][4856] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" iface="eth0" netns="/var/run/netns/cni-322544ee-a371-c137-32ff-37ab1ec614d8" Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.340 [INFO][4856] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" iface="eth0" netns="/var/run/netns/cni-322544ee-a371-c137-32ff-37ab1ec614d8" Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.340 [INFO][4856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.340 [INFO][4856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.370 [INFO][4871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" HandleID="k8s-pod-network.15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.370 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.370 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.384 [WARNING][4871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" HandleID="k8s-pod-network.15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.384 [INFO][4871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" HandleID="k8s-pod-network.15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.386 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:12.390466 containerd[1438]: 2025-09-09 00:37:12.388 [INFO][4856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:12.391184 containerd[1438]: time="2025-09-09T00:37:12.390897966Z" level=info msg="TearDown network for sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\" successfully" Sep 9 00:37:12.391184 containerd[1438]: time="2025-09-09T00:37:12.390925571Z" level=info msg="StopPodSandbox for \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\" returns successfully" Sep 9 00:37:12.391715 containerd[1438]: time="2025-09-09T00:37:12.391558767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d549548b-pkrjv,Uid:a376016b-b8d6-4e3c-a75d-da673ae09400,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:37:12.395314 systemd[1]: run-netns-cni\x2d322544ee\x2da371\x2dc137\x2d32ff\x2d37ab1ec614d8.mount: Deactivated successfully. Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.341 [INFO][4857] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.341 [INFO][4857] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" iface="eth0" netns="/var/run/netns/cni-d883a852-313d-b0bd-817b-6a80606b862e" Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.341 [INFO][4857] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" iface="eth0" netns="/var/run/netns/cni-d883a852-313d-b0bd-817b-6a80606b862e" Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.341 [INFO][4857] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" iface="eth0" netns="/var/run/netns/cni-d883a852-313d-b0bd-817b-6a80606b862e" Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.341 [INFO][4857] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.341 [INFO][4857] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.371 [INFO][4873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" HandleID="k8s-pod-network.6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.371 [INFO][4873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.386 [INFO][4873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.396 [WARNING][4873] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" HandleID="k8s-pod-network.6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.397 [INFO][4873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" HandleID="k8s-pod-network.6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.399 [INFO][4873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:12.406048 containerd[1438]: 2025-09-09 00:37:12.403 [INFO][4857] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:12.406399 containerd[1438]: time="2025-09-09T00:37:12.406230298Z" level=info msg="TearDown network for sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\" successfully" Sep 9 00:37:12.406399 containerd[1438]: time="2025-09-09T00:37:12.406252782Z" level=info msg="StopPodSandbox for \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\" returns successfully" Sep 9 00:37:12.407095 containerd[1438]: time="2025-09-09T00:37:12.407069532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bczn,Uid:5b22ce9a-63bb-41fc-bc0b-217e50145ea4,Namespace:calico-system,Attempt:1,}" Sep 9 00:37:12.410953 systemd[1]: run-netns-cni\x2dd883a852\x2d313d\x2db0bd\x2d817b\x2d6a80606b862e.mount: Deactivated successfully. Sep 9 00:37:12.458304 kubelet[2465]: E0909 00:37:12.457372 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:12.458304 kubelet[2465]: E0909 00:37:12.457578 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:12.478120 kubelet[2465]: I0909 00:37:12.478055 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dzzmz" podStartSLOduration=36.478027905 podStartE2EDuration="36.478027905s" podCreationTimestamp="2025-09-09 00:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:37:12.477478444 +0000 UTC m=+44.300677928" watchObservedRunningTime="2025-09-09 00:37:12.478027905 +0000 UTC m=+44.301227389" Sep 9 00:37:12.493548 systemd-networkd[1372]: cali81508c6f846: Gained IPv6LL Sep 9 00:37:12.556719 systemd-networkd[1372]: cali3fd083c7c6d: Gained IPv6LL Sep 9 00:37:12.575039 systemd-networkd[1372]: cali8e722e88a77: Link UP Sep 9 00:37:12.575418 systemd-networkd[1372]: cali8e722e88a77: Gained carrier Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.446 [INFO][4890] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.463 [INFO][4890] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0 calico-apiserver-78d549548b- calico-apiserver a376016b-b8d6-4e3c-a75d-da673ae09400 982 0 2025-09-09 00:36:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78d549548b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78d549548b-pkrjv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8e722e88a77 [] [] }} ContainerID="c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-pkrjv" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--pkrjv-" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.463 [INFO][4890] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" 
Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-pkrjv" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.513 [INFO][4920] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" HandleID="k8s-pod-network.c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.513 [INFO][4920] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" HandleID="k8s-pod-network.c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78d549548b-pkrjv", "timestamp":"2025-09-09 00:37:12.513629914 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.513 [INFO][4920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.513 [INFO][4920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.513 [INFO][4920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.527 [INFO][4920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" host="localhost" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.538 [INFO][4920] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.543 [INFO][4920] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.546 [INFO][4920] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.549 [INFO][4920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.549 [INFO][4920] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" host="localhost" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.551 [INFO][4920] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799 Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.558 [INFO][4920] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" host="localhost" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.566 [INFO][4920] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] 
block=192.168.88.128/26 handle="k8s-pod-network.c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" host="localhost" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.566 [INFO][4920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" host="localhost" Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.566 [INFO][4920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:12.592272 containerd[1438]: 2025-09-09 00:37:12.566 [INFO][4920] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" HandleID="k8s-pod-network.c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:12.592907 containerd[1438]: 2025-09-09 00:37:12.569 [INFO][4890] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-pkrjv" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0", GenerateName:"calico-apiserver-78d549548b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a376016b-b8d6-4e3c-a75d-da673ae09400", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d549548b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78d549548b-pkrjv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e722e88a77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:12.592907 containerd[1438]: 2025-09-09 00:37:12.569 [INFO][4890] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-pkrjv" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:12.592907 containerd[1438]: 2025-09-09 00:37:12.569 [INFO][4890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e722e88a77 ContainerID="c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-pkrjv" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:12.592907 containerd[1438]: 
2025-09-09 00:37:12.575 [INFO][4890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-pkrjv" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:12.592907 containerd[1438]: 2025-09-09 00:37:12.577 [INFO][4890] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-pkrjv" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0", GenerateName:"calico-apiserver-78d549548b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a376016b-b8d6-4e3c-a75d-da673ae09400", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d549548b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799", Pod:"calico-apiserver-78d549548b-pkrjv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e722e88a77", MAC:"f2:20:be:08:00:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:12.592907 containerd[1438]: 2025-09-09 00:37:12.589 [INFO][4890] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799" Namespace="calico-apiserver" Pod="calico-apiserver-78d549548b-pkrjv" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:12.611512 containerd[1438]: time="2025-09-09T00:37:12.611257258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:12.611512 containerd[1438]: time="2025-09-09T00:37:12.611313269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:12.611512 containerd[1438]: time="2025-09-09T00:37:12.611331192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:12.611512 containerd[1438]: time="2025-09-09T00:37:12.611412967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:12.635728 systemd[1]: Started cri-containerd-c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799.scope - libcontainer container c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799. Sep 9 00:37:12.648155 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:12.678615 systemd-networkd[1372]: calide66a670e1a: Link UP Sep 9 00:37:12.679136 systemd-networkd[1372]: calide66a670e1a: Gained carrier Sep 9 00:37:12.683690 systemd-networkd[1372]: cali1bb5c0efd9e: Gained IPv6LL Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.464 [INFO][4901] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.487 [INFO][4901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5bczn-eth0 csi-node-driver- calico-system 5b22ce9a-63bb-41fc-bc0b-217e50145ea4 983 0 2025-09-09 00:36:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5bczn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calide66a670e1a [] [] }} ContainerID="c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" Namespace="calico-system" Pod="csi-node-driver-5bczn" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bczn-" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.487 [INFO][4901] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" Namespace="calico-system" Pod="csi-node-driver-5bczn" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.540 [INFO][4930] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" HandleID="k8s-pod-network.c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.540 [INFO][4930] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" HandleID="k8s-pod-network.c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5bczn", "timestamp":"2025-09-09 00:37:12.540085046 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.540 [INFO][4930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.566 [INFO][4930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.566 [INFO][4930] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.630 [INFO][4930] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" host="localhost" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.639 [INFO][4930] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.650 [INFO][4930] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.652 [INFO][4930] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.655 [INFO][4930] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.655 [INFO][4930] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" host="localhost" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.657 [INFO][4930] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13 Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.661 [INFO][4930] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" host="localhost" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.671 [INFO][4930] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" host="localhost" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.671 [INFO][4930] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" host="localhost" Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.671 [INFO][4930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:37:12.696352 containerd[1438]: 2025-09-09 00:37:12.671 [INFO][4930] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" HandleID="k8s-pod-network.c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:12.696940 containerd[1438]: 2025-09-09 00:37:12.676 [INFO][4901] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" Namespace="calico-system" Pod="csi-node-driver-5bczn" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bczn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5bczn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5b22ce9a-63bb-41fc-bc0b-217e50145ea4", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5bczn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide66a670e1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:12.696940 containerd[1438]: 2025-09-09 00:37:12.676 [INFO][4901] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" Namespace="calico-system" Pod="csi-node-driver-5bczn" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:12.696940 containerd[1438]: 2025-09-09 00:37:12.676 [INFO][4901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide66a670e1a ContainerID="c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" Namespace="calico-system" Pod="csi-node-driver-5bczn" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:12.696940 containerd[1438]: 2025-09-09 00:37:12.679 [INFO][4901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" Namespace="calico-system" Pod="csi-node-driver-5bczn" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:12.696940 containerd[1438]: 2025-09-09 00:37:12.679 [INFO][4901] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" Namespace="calico-system" Pod="csi-node-driver-5bczn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5bczn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5bczn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5b22ce9a-63bb-41fc-bc0b-217e50145ea4", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13", Pod:"csi-node-driver-5bczn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide66a670e1a", MAC:"d2:ae:b6:d7:19:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:12.696940 containerd[1438]: 2025-09-09 00:37:12.692 [INFO][4901] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13" Namespace="calico-system" Pod="csi-node-driver-5bczn" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:12.702361 containerd[1438]: time="2025-09-09T00:37:12.702214979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d549548b-pkrjv,Uid:a376016b-b8d6-4e3c-a75d-da673ae09400,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799\"" Sep 9 00:37:12.727058 containerd[1438]: time="2025-09-09T00:37:12.726800328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:12.727058 containerd[1438]: time="2025-09-09T00:37:12.726964078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:12.727058 containerd[1438]: time="2025-09-09T00:37:12.726986442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:12.728003 containerd[1438]: time="2025-09-09T00:37:12.727696732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:12.746753 systemd[1]: Started cri-containerd-c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13.scope - libcontainer container c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13. 
Sep 9 00:37:12.759218 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:12.768794 containerd[1438]: time="2025-09-09T00:37:12.768725537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bczn,Uid:5b22ce9a-63bb-41fc-bc0b-217e50145ea4,Namespace:calico-system,Attempt:1,} returns sandbox id \"c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13\"" Sep 9 00:37:13.091526 containerd[1438]: time="2025-09-09T00:37:13.091477814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:13.092034 containerd[1438]: time="2025-09-09T00:37:13.092001188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 9 00:37:13.093075 containerd[1438]: time="2025-09-09T00:37:13.093047055Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:13.095224 containerd[1438]: time="2025-09-09T00:37:13.095181798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:13.096627 containerd[1438]: time="2025-09-09T00:37:13.096597611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.292215438s" Sep 9 00:37:13.096675 containerd[1438]: time="2025-09-09T00:37:13.096632497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 9 00:37:13.097806 containerd[1438]: time="2025-09-09T00:37:13.097414197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:37:13.101689 containerd[1438]: time="2025-09-09T00:37:13.101647755Z" level=info msg="CreateContainer within sandbox \"9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 00:37:13.113358 containerd[1438]: time="2025-09-09T00:37:13.113306043Z" level=info msg="CreateContainer within sandbox \"9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"51f342c026ed100746da430f3459d99e4557f3ef39c39635684925ba13044a11\"" Sep 9 00:37:13.114171 containerd[1438]: time="2025-09-09T00:37:13.114134271Z" level=info msg="StartContainer for \"51f342c026ed100746da430f3459d99e4557f3ef39c39635684925ba13044a11\"" Sep 9 00:37:13.144725 systemd[1]: Started cri-containerd-51f342c026ed100746da430f3459d99e4557f3ef39c39635684925ba13044a11.scope - libcontainer container 51f342c026ed100746da430f3459d99e4557f3ef39c39635684925ba13044a11. 
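
The goldmane pull bookends are consistent: PullImage was logged at 00:37:10.804 and the 61,845,178-byte image was reported pulled "in 2.292215438s" at 00:37:13.096, i.e. roughly 27 MB/s from ghcr.io. The duration suffix is Go's time.Duration text format and parses directly:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values copied from the "Pulled image" line above.
    	d, err := time.ParseDuration("2.292215438s")
    	if err != nil {
    		panic(err)
    	}
    	const bytes = 61845178 // repo-digest size from the same line
    	fmt.Printf("%.1f MB/s\n", bytes/d.Seconds()/1e6) // ~27.0 MB/s
    }
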
Sep 9 00:37:13.186092 containerd[1438]: time="2025-09-09T00:37:13.186035267Z" level=info msg="StartContainer for \"51f342c026ed100746da430f3459d99e4557f3ef39c39635684925ba13044a11\" returns successfully" Sep 9 00:37:13.268245 containerd[1438]: time="2025-09-09T00:37:13.268194859Z" level=info msg="StopPodSandbox for \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\"" Sep 9 00:37:13.323741 systemd-networkd[1372]: calicb3c862558b: Gained IPv6LL Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.312 [INFO][5108] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.312 [INFO][5108] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" iface="eth0" netns="/var/run/netns/cni-edf3f801-af53-8bfb-2dd9-19757f2ea5d7" Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.312 [INFO][5108] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" iface="eth0" netns="/var/run/netns/cni-edf3f801-af53-8bfb-2dd9-19757f2ea5d7" Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.313 [INFO][5108] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" iface="eth0" netns="/var/run/netns/cni-edf3f801-af53-8bfb-2dd9-19757f2ea5d7" Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.313 [INFO][5108] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.313 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.331 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" HandleID="k8s-pod-network.97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.331 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.331 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.342 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" HandleID="k8s-pod-network.97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.342 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" HandleID="k8s-pod-network.97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.343 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:13.347404 containerd[1438]: 2025-09-09 00:37:13.345 [INFO][5108] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:13.347817 containerd[1438]: time="2025-09-09T00:37:13.347441090Z" level=info msg="TearDown network for sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\" successfully" Sep 9 00:37:13.347817 containerd[1438]: time="2025-09-09T00:37:13.347467495Z" level=info msg="StopPodSandbox for \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\" returns successfully" Sep 9 00:37:13.348034 containerd[1438]: time="2025-09-09T00:37:13.348011752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4886c776-cnp84,Uid:4ddd1c7a-7179-4302-aedf-d6664fe29be7,Namespace:calico-system,Attempt:1,}" Sep 9 00:37:13.384584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3144888848.mount: Deactivated successfully. Sep 9 00:37:13.384686 systemd[1]: run-netns-cni\x2dedf3f801\x2daf53\x2d8bfb\x2d2dd9\x2d19757f2ea5d7.mount: Deactivated successfully. 
Sep 9 00:37:13.455263 systemd-networkd[1372]: cali2ee15e7f08c: Link UP Sep 9 00:37:13.456175 systemd-networkd[1372]: cali2ee15e7f08c: Gained carrier Sep 9 00:37:13.470725 kubelet[2465]: E0909 00:37:13.470550 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:13.470725 kubelet[2465]: E0909 00:37:13.470569 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.374 [INFO][5125] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.391 [INFO][5125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0 calico-kube-controllers-7d4886c776- calico-system 4ddd1c7a-7179-4302-aedf-d6664fe29be7 1007 0 2025-09-09 00:36:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d4886c776 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7d4886c776-cnp84 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2ee15e7f08c [] [] }} ContainerID="31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" Namespace="calico-system" Pod="calico-kube-controllers-7d4886c776-cnp84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.391 [INFO][5125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" Namespace="calico-system" Pod="calico-kube-controllers-7d4886c776-cnp84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.414 [INFO][5139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" HandleID="k8s-pod-network.31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.414 [INFO][5139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" HandleID="k8s-pod-network.31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001377a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7d4886c776-cnp84", "timestamp":"2025-09-09 00:37:13.414252454 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.414 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
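Every IPAM operation in this stretch brackets its work with "About to acquire host-wide IPAM lock" / "Acquired" / "Released": address assignments and releases on one node are serialized so racing CNI ADD and DEL calls cannot hand out the same address twice. A minimal sketch of that serialization, with a sync.Mutex standing in for Calico's real lock:

package main

import (
	"fmt"
	"sync"
)

// allocator hands out sequential host ordinals from one block while a
// host-wide lock is held, so concurrent sandbox setups cannot collide.
// This is a stand-in sketch, not Calico's locking implementation.
type allocator struct {
	mu   sync.Mutex // the "host-wide IPAM lock" of the log lines above
	next int
}

func (a *allocator) assign() int {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := a.next
	a.next++
	return ip
}

func main() {
	a := &allocator{next: 136} // next free host in 192.168.88.128/26 per the log
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // four racing sandbox setups
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Printf("assigned 192.168.88.%d/26\n", a.assign())
		}()
	}
	wg.Wait()
}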
Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.414 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.414 [INFO][5139] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.425 [INFO][5139] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" host="localhost" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.429 [INFO][5139] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.434 [INFO][5139] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.435 [INFO][5139] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.438 [INFO][5139] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.438 [INFO][5139] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" host="localhost" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.439 [INFO][5139] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829 Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.444 [INFO][5139] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" host="localhost" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.451 [INFO][5139] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" host="localhost" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.451 [INFO][5139] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" host="localhost" Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.451 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
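The walk above confirms the host's affinity for block 192.168.88.128/26, loads the block, and claims the first free address, 192.168.88.136. A sketch of that "lowest free ordinal in a block" step; the used set below is hypothetical (the real block state lives in the Calico datastore) and is chosen so the sketch reproduces the logged answer:

package main

import (
	"fmt"
	"net/netip"
)

// firstFree scans a block for the lowest unassigned address.
func firstFree(base netip.Addr, blockSize int, used map[int]bool) (netip.Addr, bool) {
	addr := base
	for ord := 0; ord < blockSize; ord++ {
		if !used[ord] {
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	base := netip.MustParseAddr("192.168.88.128")
	used := map[int]bool{}
	for ord := 0; ord <= 7; ord++ { // assume .128 through .135 already claimed
		used[ord] = true
	}
	if ip, ok := firstFree(base, 64, used); ok { // a /26 holds 64 ordinals
		fmt.Println("claimed", ip) // claimed 192.168.88.136, as in the log
	}
}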
Sep 9 00:37:13.473191 containerd[1438]: 2025-09-09 00:37:13.451 [INFO][5139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" HandleID="k8s-pod-network.31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:13.474063 containerd[1438]: 2025-09-09 00:37:13.453 [INFO][5125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" Namespace="calico-system" Pod="calico-kube-controllers-7d4886c776-cnp84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0", GenerateName:"calico-kube-controllers-7d4886c776-", Namespace:"calico-system", SelfLink:"", UID:"4ddd1c7a-7179-4302-aedf-d6664fe29be7", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4886c776", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7d4886c776-cnp84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ee15e7f08c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:13.474063 containerd[1438]: 2025-09-09 00:37:13.453 [INFO][5125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" Namespace="calico-system" Pod="calico-kube-controllers-7d4886c776-cnp84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:13.474063 containerd[1438]: 2025-09-09 00:37:13.453 [INFO][5125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ee15e7f08c ContainerID="31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" Namespace="calico-system" Pod="calico-kube-controllers-7d4886c776-cnp84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:13.474063 containerd[1438]: 2025-09-09 00:37:13.455 [INFO][5125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" Namespace="calico-system" Pod="calico-kube-controllers-7d4886c776-cnp84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:13.474063 containerd[1438]: 2025-09-09 00:37:13.457 [INFO][5125] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" Namespace="calico-system" Pod="calico-kube-controllers-7d4886c776-cnp84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0", GenerateName:"calico-kube-controllers-7d4886c776-", Namespace:"calico-system", SelfLink:"", UID:"4ddd1c7a-7179-4302-aedf-d6664fe29be7", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4886c776", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829", Pod:"calico-kube-controllers-7d4886c776-cnp84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ee15e7f08c", MAC:"1e:d3:ca:e2:3e:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:13.474063 containerd[1438]: 2025-09-09 00:37:13.469 [INFO][5125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829" Namespace="calico-system" Pod="calico-kube-controllers-7d4886c776-cnp84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:13.492874 containerd[1438]: time="2025-09-09T00:37:13.492783077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:13.492874 containerd[1438]: time="2025-09-09T00:37:13.492836206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:13.492874 containerd[1438]: time="2025-09-09T00:37:13.492847408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:13.493459 containerd[1438]: time="2025-09-09T00:37:13.492925102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:13.520775 systemd[1]: Started cri-containerd-31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829.scope - libcontainer container 31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829. 
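The endpoint above gets the host-side veth cali2ee15e7f08c: a "cali" prefix plus eleven hash characters, which fills the 15-byte Linux interface-name limit exactly. A sketch of deriving a stable, length-limited name that way; the exact input Calico hashes is an assumption here, only the prefix-plus-hash shape is taken from the log:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// hostVethName derives a stable interface name from a workload key.
// Linux caps interface names at 15 bytes, so "cali" plus 11 hex chars
// fits exactly (compare cali2ee15e7f08c above). What Calico actually
// hashes is not shown in the log; the key below is illustrative.
func hostVethName(workloadKey string) string {
	sum := sha1.Sum([]byte(workloadKey))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(hostVethName("calico-system/calico-kube-controllers-7d4886c776-cnp84"))
}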
Sep 9 00:37:13.534114 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:13.557564 containerd[1438]: time="2025-09-09T00:37:13.557513388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4886c776-cnp84,Uid:4ddd1c7a-7179-4302-aedf-d6664fe29be7,Namespace:calico-system,Attempt:1,} returns sandbox id \"31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829\"" Sep 9 00:37:14.285754 systemd-networkd[1372]: cali8e722e88a77: Gained IPv6LL Sep 9 00:37:14.473639 kubelet[2465]: E0909 00:37:14.473573 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:14.475690 systemd-networkd[1372]: calide66a670e1a: Gained IPv6LL Sep 9 00:37:14.481740 kubelet[2465]: I0909 00:37:14.481695 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:37:14.973822 containerd[1438]: time="2025-09-09T00:37:14.973774422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:14.974589 containerd[1438]: time="2025-09-09T00:37:14.974560320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 9 00:37:14.975290 containerd[1438]: time="2025-09-09T00:37:14.975264003Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:14.982352 containerd[1438]: time="2025-09-09T00:37:14.982305635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:14.983048 containerd[1438]: time="2025-09-09T00:37:14.983008558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.885563756s" Sep 9 00:37:14.983048 containerd[1438]: time="2025-09-09T00:37:14.983043604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 00:37:14.984109 containerd[1438]: time="2025-09-09T00:37:14.984086467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:37:14.989151 containerd[1438]: time="2025-09-09T00:37:14.989117587Z" level=info msg="CreateContainer within sandbox \"fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:37:15.001040 containerd[1438]: time="2025-09-09T00:37:15.000960020Z" level=info msg="CreateContainer within sandbox \"fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d5387cbcb5e26cf2ca3f56644a9d876618821bc22ef5f97b20e6be3be683ac82\"" Sep 9 00:37:15.001726 containerd[1438]: time="2025-09-09T00:37:15.001699829Z" level=info msg="StartContainer for 
\"d5387cbcb5e26cf2ca3f56644a9d876618821bc22ef5f97b20e6be3be683ac82\"" Sep 9 00:37:15.027722 systemd[1]: Started cri-containerd-d5387cbcb5e26cf2ca3f56644a9d876618821bc22ef5f97b20e6be3be683ac82.scope - libcontainer container d5387cbcb5e26cf2ca3f56644a9d876618821bc22ef5f97b20e6be3be683ac82. Sep 9 00:37:15.067083 containerd[1438]: time="2025-09-09T00:37:15.067040737Z" level=info msg="StartContainer for \"d5387cbcb5e26cf2ca3f56644a9d876618821bc22ef5f97b20e6be3be683ac82\" returns successfully" Sep 9 00:37:15.263716 containerd[1438]: time="2025-09-09T00:37:15.263583109Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:15.266992 containerd[1438]: time="2025-09-09T00:37:15.266301175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:37:15.268240 containerd[1438]: time="2025-09-09T00:37:15.268197779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 284.082908ms" Sep 9 00:37:15.268306 containerd[1438]: time="2025-09-09T00:37:15.268242787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 00:37:15.269563 containerd[1438]: time="2025-09-09T00:37:15.269337495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:37:15.274919 containerd[1438]: time="2025-09-09T00:37:15.274788668Z" level=info msg="CreateContainer within sandbox \"c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:37:15.290914 containerd[1438]: time="2025-09-09T00:37:15.290617178Z" level=info msg="CreateContainer within sandbox \"c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a3bf46914d32ae453d810b7a05e7f8f200af3df15ac60c389cfbcb7d61016d42\"" Sep 9 00:37:15.291319 containerd[1438]: time="2025-09-09T00:37:15.291291414Z" level=info msg="StartContainer for \"a3bf46914d32ae453d810b7a05e7f8f200af3df15ac60c389cfbcb7d61016d42\"" Sep 9 00:37:15.320749 systemd[1]: Started cri-containerd-a3bf46914d32ae453d810b7a05e7f8f200af3df15ac60c389cfbcb7d61016d42.scope - libcontainer container a3bf46914d32ae453d810b7a05e7f8f200af3df15ac60c389cfbcb7d61016d42. 
Sep 9 00:37:15.496184 containerd[1438]: time="2025-09-09T00:37:15.496133407Z" level=info msg="StartContainer for \"a3bf46914d32ae453d810b7a05e7f8f200af3df15ac60c389cfbcb7d61016d42\" returns successfully" Sep 9 00:37:15.500675 systemd-networkd[1372]: cali2ee15e7f08c: Gained IPv6LL Sep 9 00:37:15.514148 kubelet[2465]: I0909 00:37:15.514031 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-qpjc4" podStartSLOduration=24.211906972 podStartE2EDuration="26.514014789s" podCreationTimestamp="2025-09-09 00:36:49 +0000 UTC" firstStartedPulling="2025-09-09 00:37:10.795171036 +0000 UTC m=+42.618370520" lastFinishedPulling="2025-09-09 00:37:13.097278853 +0000 UTC m=+44.920478337" observedRunningTime="2025-09-09 00:37:13.48939359 +0000 UTC m=+45.312593074" watchObservedRunningTime="2025-09-09 00:37:15.514014789 +0000 UTC m=+47.337214273" Sep 9 00:37:15.514148 kubelet[2465]: I0909 00:37:15.514117 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78d549548b-pkrjv" podStartSLOduration=26.9492335 podStartE2EDuration="29.514112086s" podCreationTimestamp="2025-09-09 00:36:46 +0000 UTC" firstStartedPulling="2025-09-09 00:37:12.70429176 +0000 UTC m=+44.527491244" lastFinishedPulling="2025-09-09 00:37:15.269170266 +0000 UTC m=+47.092369830" observedRunningTime="2025-09-09 00:37:15.513112795 +0000 UTC m=+47.336312279" watchObservedRunningTime="2025-09-09 00:37:15.514112086 +0000 UTC m=+47.337311570" Sep 9 00:37:16.343855 containerd[1438]: time="2025-09-09T00:37:16.343781766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:16.345345 containerd[1438]: time="2025-09-09T00:37:16.345153836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 9 00:37:16.346408 containerd[1438]: time="2025-09-09T00:37:16.346209133Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:16.349386 containerd[1438]: time="2025-09-09T00:37:16.349318294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:16.351179 containerd[1438]: time="2025-09-09T00:37:16.350934565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.081568865s" Sep 9 00:37:16.351179 containerd[1438]: time="2025-09-09T00:37:16.350968491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 9 00:37:16.354733 containerd[1438]: time="2025-09-09T00:37:16.354684554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:37:16.358455 containerd[1438]: time="2025-09-09T00:37:16.358422020Z" level=info msg="CreateContainer within sandbox \"c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" 
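The kubelet pod_startup_latency_tracker lines above relate their fields arithmetically: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The relation is inferred from the logged values lining up exactly, not quoted from kubelet source; a check against the goldmane numbers:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

// Reproduces the goldmane-54d579b49d-qpjc4 figures from the line above:
// the SLO duration excludes time spent pulling images.
func main() {
	created := mustParse("2025-09-09T00:36:49Z")
	running := mustParse("2025-09-09T00:37:15.514014789Z") // watchObservedRunningTime
	pullStart := mustParse("2025-09-09T00:37:10.795171036Z")
	pullEnd := mustParse("2025-09-09T00:37:13.097278853Z")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e) // 26.514014789s
	fmt.Println("podStartSLOduration:", slo) // 24.211906972s
}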
Sep 9 00:37:16.383134 containerd[1438]: time="2025-09-09T00:37:16.383088236Z" level=info msg="CreateContainer within sandbox \"c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bc570f2cc622a4ca4b62d65ec44e1c28754424a59b929db3d8114ae7634bfe0a\"" Sep 9 00:37:16.383844 containerd[1438]: time="2025-09-09T00:37:16.383760348Z" level=info msg="StartContainer for \"bc570f2cc622a4ca4b62d65ec44e1c28754424a59b929db3d8114ae7634bfe0a\"" Sep 9 00:37:16.422783 systemd[1]: Started cri-containerd-bc570f2cc622a4ca4b62d65ec44e1c28754424a59b929db3d8114ae7634bfe0a.scope - libcontainer container bc570f2cc622a4ca4b62d65ec44e1c28754424a59b929db3d8114ae7634bfe0a. Sep 9 00:37:16.460954 containerd[1438]: time="2025-09-09T00:37:16.460911564Z" level=info msg="StartContainer for \"bc570f2cc622a4ca4b62d65ec44e1c28754424a59b929db3d8114ae7634bfe0a\" returns successfully" Sep 9 00:37:16.511966 kubelet[2465]: I0909 00:37:16.511585 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:37:16.514640 kubelet[2465]: I0909 00:37:16.514606 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:37:16.760335 kubelet[2465]: I0909 00:37:16.759433 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:37:16.760335 kubelet[2465]: E0909 00:37:16.759825 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:16.811600 kubelet[2465]: I0909 00:37:16.811513 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78d549548b-j9zf9" podStartSLOduration=26.685022117 podStartE2EDuration="30.811498704s" podCreationTimestamp="2025-09-09 00:36:46 +0000 UTC" firstStartedPulling="2025-09-09 00:37:10.857458013 +0000 UTC m=+42.680657457" lastFinishedPulling="2025-09-09 00:37:14.9839346 +0000 UTC m=+46.807134044" observedRunningTime="2025-09-09 00:37:15.527109471 +0000 UTC m=+47.350308955" watchObservedRunningTime="2025-09-09 00:37:16.811498704 +0000 UTC m=+48.634698188" Sep 9 00:37:16.822824 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:33974.service - OpenSSH per-connection server daemon (10.0.0.1:33974). Sep 9 00:37:16.864560 sshd[5407]: Accepted publickey for core from 10.0.0.1 port 33974 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:37:16.866787 sshd[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:16.872702 systemd-logind[1411]: New session 8 of user core. Sep 9 00:37:16.877737 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:37:17.072594 kernel: bpftool[5439]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 9 00:37:17.130858 sshd[5407]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:17.135389 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:33974.service: Deactivated successfully. Sep 9 00:37:17.137048 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:37:17.138731 systemd-logind[1411]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:37:17.139495 systemd-logind[1411]: Removed session 8. 
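The recurring kubelet dns.go:153 errors in this stretch show an over-long nameserver list being trimmed, with exactly three survivors ("1.1.1.1 1.0.0.1 8.8.8.8"); three matches the classic resolv.conf MAXNS limit. A trimming sketch under that assumption, not kubelet's code (the omitted fourth server is hypothetical, since the log does not show it):

package main

import "fmt"

const maxNameservers = 3 // assumed limit, consistent with the three applied servers

func applyNameserverLimit(configured []string) ([]string, bool) {
	if len(configured) <= maxNameservers {
		return configured, false
	}
	return configured[:maxNameservers], true
}

func main() {
	applied, truncated := applyNameserverLimit(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}) // 9.9.9.9 is hypothetical
	if truncated {
		fmt.Println("Nameserver limits exceeded, applied line:", applied)
	}
}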
Sep 9 00:37:17.311952 systemd-networkd[1372]: vxlan.calico: Link UP Sep 9 00:37:17.311961 systemd-networkd[1372]: vxlan.calico: Gained carrier Sep 9 00:37:17.517279 kubelet[2465]: E0909 00:37:17.517237 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:18.817005 containerd[1438]: time="2025-09-09T00:37:18.816959327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:18.818122 containerd[1438]: time="2025-09-09T00:37:18.817662080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 9 00:37:18.818811 containerd[1438]: time="2025-09-09T00:37:18.818762417Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:18.821022 containerd[1438]: time="2025-09-09T00:37:18.820974774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:18.821696 containerd[1438]: time="2025-09-09T00:37:18.821663805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.466932724s" Sep 9 00:37:18.821696 containerd[1438]: time="2025-09-09T00:37:18.821696450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 9 00:37:18.823745 containerd[1438]: time="2025-09-09T00:37:18.823404086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:37:18.835720 containerd[1438]: time="2025-09-09T00:37:18.835689586Z" level=info msg="CreateContainer within sandbox \"31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:37:18.847845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793312906.mount: Deactivated successfully. Sep 9 00:37:18.849668 containerd[1438]: time="2025-09-09T00:37:18.849635314Z" level=info msg="CreateContainer within sandbox \"31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"587b5bcef6e5ab8fedf91b934b8702f7fec701cbf19123a84feea5b3a584583c\"" Sep 9 00:37:18.850071 containerd[1438]: time="2025-09-09T00:37:18.850051221Z" level=info msg="StartContainer for \"587b5bcef6e5ab8fedf91b934b8702f7fec701cbf19123a84feea5b3a584583c\"" Sep 9 00:37:18.882705 systemd[1]: Started cri-containerd-587b5bcef6e5ab8fedf91b934b8702f7fec701cbf19123a84feea5b3a584583c.scope - libcontainer container 587b5bcef6e5ab8fedf91b934b8702f7fec701cbf19123a84feea5b3a584583c. 
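The several "Gained IPv6LL" events above report kernel-assigned EUI-64 link-local addresses on the cali* veths and, here, on vxlan.calico. The derivation: flip the universal/local bit of the first MAC octet and splice ff:fe into the middle. Using the MAC logged earlier for cali2ee15e7f08c (1e:d3:ca:e2:3e:17); the resulting address is derived, not itself present in the log:

package main

import (
	"fmt"
	"net"
	"net/netip"
)

// linkLocalFromMAC computes the EUI-64 IPv6 link-local address for a MAC.
func linkLocalFromMAC(mac net.HardwareAddr) netip.Addr {
	var a [16]byte
	a[0], a[1] = 0xfe, 0x80 // fe80::/64 prefix
	a[8] = mac[0] ^ 0x02    // flip the universal/local bit
	a[9], a[10] = mac[1], mac[2]
	a[11], a[12] = 0xff, 0xfe // EUI-64 filler
	a[13], a[14], a[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(a)
}

func main() {
	mac, err := net.ParseMAC("1e:d3:ca:e2:3e:17") // MAC of cali2ee15e7f08c per the log
	if err != nil {
		panic(err)
	}
	fmt.Println(linkLocalFromMAC(mac)) // fe80::1cd3:caff:fee2:3e17
}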
Sep 9 00:37:18.952373 containerd[1438]: time="2025-09-09T00:37:18.952317706Z" level=info msg="StartContainer for \"587b5bcef6e5ab8fedf91b934b8702f7fec701cbf19123a84feea5b3a584583c\" returns successfully" Sep 9 00:37:19.339741 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Sep 9 00:37:19.592439 kubelet[2465]: I0909 00:37:19.592074 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d4886c776-cnp84" podStartSLOduration=24.328243675 podStartE2EDuration="29.592055853s" podCreationTimestamp="2025-09-09 00:36:50 +0000 UTC" firstStartedPulling="2025-09-09 00:37:13.55886755 +0000 UTC m=+45.382067035" lastFinishedPulling="2025-09-09 00:37:18.822679729 +0000 UTC m=+50.645879213" observedRunningTime="2025-09-09 00:37:19.536573912 +0000 UTC m=+51.359773397" watchObservedRunningTime="2025-09-09 00:37:19.592055853 +0000 UTC m=+51.415255297" Sep 9 00:37:19.986414 containerd[1438]: time="2025-09-09T00:37:19.986343854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:19.987973 containerd[1438]: time="2025-09-09T00:37:19.987829169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 9 00:37:19.987973 containerd[1438]: time="2025-09-09T00:37:19.987916743Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:19.990833 containerd[1438]: time="2025-09-09T00:37:19.990785997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:19.991289 containerd[1438]: time="2025-09-09T00:37:19.991244149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.167807659s" Sep 9 00:37:19.991341 containerd[1438]: time="2025-09-09T00:37:19.991291477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 9 00:37:19.995979 containerd[1438]: time="2025-09-09T00:37:19.995947494Z" level=info msg="CreateContainer within sandbox \"c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:37:20.013501 containerd[1438]: time="2025-09-09T00:37:20.013372619Z" level=info msg="CreateContainer within sandbox \"c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7b51995f37239ef78a61737f482dfd9fd271b67682df2a4c49d3d1fc9fd9a2ef\"" Sep 9 00:37:20.013997 containerd[1438]: time="2025-09-09T00:37:20.013958990Z" level=info msg="StartContainer for \"7b51995f37239ef78a61737f482dfd9fd271b67682df2a4c49d3d1fc9fd9a2ef\"" Sep 9 00:37:20.049721 systemd[1]: Started 
cri-containerd-7b51995f37239ef78a61737f482dfd9fd271b67682df2a4c49d3d1fc9fd9a2ef.scope - libcontainer container 7b51995f37239ef78a61737f482dfd9fd271b67682df2a4c49d3d1fc9fd9a2ef. Sep 9 00:37:20.090494 kubelet[2465]: I0909 00:37:20.090456 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:37:20.098745 containerd[1438]: time="2025-09-09T00:37:20.097783466Z" level=info msg="StartContainer for \"7b51995f37239ef78a61737f482dfd9fd271b67682df2a4c49d3d1fc9fd9a2ef\" returns successfully" Sep 9 00:37:20.348641 kubelet[2465]: I0909 00:37:20.348518 2465 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:37:20.356868 kubelet[2465]: I0909 00:37:20.356817 2465 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:37:20.541924 kubelet[2465]: I0909 00:37:20.541851 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5bczn" podStartSLOduration=23.319718608 podStartE2EDuration="30.541826279s" podCreationTimestamp="2025-09-09 00:36:50 +0000 UTC" firstStartedPulling="2025-09-09 00:37:12.770169562 +0000 UTC m=+44.593369006" lastFinishedPulling="2025-09-09 00:37:19.992277193 +0000 UTC m=+51.815476677" observedRunningTime="2025-09-09 00:37:20.537809895 +0000 UTC m=+52.361009379" watchObservedRunningTime="2025-09-09 00:37:20.541826279 +0000 UTC m=+52.365025723" Sep 9 00:37:22.146260 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:51102.service - OpenSSH per-connection server daemon (10.0.0.1:51102). Sep 9 00:37:22.212471 sshd[5762]: Accepted publickey for core from 10.0.0.1 port 51102 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:37:22.214447 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:22.219266 systemd-logind[1411]: New session 9 of user core. Sep 9 00:37:22.229761 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:37:22.567720 sshd[5762]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:22.571494 systemd-logind[1411]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:37:22.571739 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:51102.service: Deactivated successfully. Sep 9 00:37:22.575405 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:37:22.578614 systemd-logind[1411]: Removed session 9. Sep 9 00:37:23.455567 kubelet[2465]: I0909 00:37:23.455430 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:37:27.583094 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:51106.service - OpenSSH per-connection server daemon (10.0.0.1:51106). Sep 9 00:37:27.623897 sshd[5787]: Accepted publickey for core from 10.0.0.1 port 51106 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:37:27.626213 sshd[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:27.633599 systemd-logind[1411]: New session 10 of user core. Sep 9 00:37:27.642048 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:37:27.849157 sshd[5787]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:27.860114 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:51106.service: Deactivated successfully. Sep 9 00:37:27.861997 systemd[1]: session-10.scope: Deactivated successfully. 
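The csi_plugin.go lines above show kubelet validating and registering the csi.tigera.io driver at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. Kubelet's real registration speaks gRPC over that socket; the sketch below only checks that something is accepting connections there, as a minimal stdlib liveness probe:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("driver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("csi.tigera.io is listening on", sock)
}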
Sep 9 00:37:27.863939 systemd-logind[1411]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:37:27.865914 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:51112.service - OpenSSH per-connection server daemon (10.0.0.1:51112). Sep 9 00:37:27.867884 systemd-logind[1411]: Removed session 10. Sep 9 00:37:27.915176 sshd[5810]: Accepted publickey for core from 10.0.0.1 port 51112 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:37:27.916794 sshd[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:27.922689 systemd-logind[1411]: New session 11 of user core. Sep 9 00:37:27.930727 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:37:28.144949 sshd[5810]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:28.157720 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:51112.service: Deactivated successfully. Sep 9 00:37:28.161202 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:37:28.162913 systemd-logind[1411]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:37:28.169311 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:51122.service - OpenSSH per-connection server daemon (10.0.0.1:51122). Sep 9 00:37:28.171785 systemd-logind[1411]: Removed session 11. Sep 9 00:37:28.221272 sshd[5828]: Accepted publickey for core from 10.0.0.1 port 51122 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:37:28.222796 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:28.227625 systemd-logind[1411]: New session 12 of user core. Sep 9 00:37:28.236727 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:37:28.273822 containerd[1438]: time="2025-09-09T00:37:28.273693005Z" level=info msg="StopPodSandbox for \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\"" Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.342 [WARNING][5844] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"af9f3d40-f917-4ead-8678-9e2ee2432935", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658", Pod:"coredns-674b8bbfcf-dzzmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb3c862558b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.343 [INFO][5844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.343 [INFO][5844] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" iface="eth0" netns="" Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.343 [INFO][5844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.343 [INFO][5844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.370 [INFO][5862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" HandleID="k8s-pod-network.2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.370 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.370 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.379 [WARNING][5862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" HandleID="k8s-pod-network.2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.379 [INFO][5862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" HandleID="k8s-pod-network.2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.381 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:28.386594 containerd[1438]: 2025-09-09 00:37:28.384 [INFO][5844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:28.386980 containerd[1438]: time="2025-09-09T00:37:28.386585843Z" level=info msg="TearDown network for sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\" successfully" Sep 9 00:37:28.386980 containerd[1438]: time="2025-09-09T00:37:28.386612367Z" level=info msg="StopPodSandbox for \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\" returns successfully" Sep 9 00:37:28.387851 containerd[1438]: time="2025-09-09T00:37:28.387605385Z" level=info msg="RemovePodSandbox for \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\"" Sep 9 00:37:28.390558 containerd[1438]: time="2025-09-09T00:37:28.390282956Z" level=info msg="Forcibly stopping sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\"" Sep 9 00:37:28.431340 sshd[5828]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:28.438167 systemd-logind[1411]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:37:28.438662 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:51122.service: Deactivated successfully. Sep 9 00:37:28.441979 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:37:28.442835 systemd-logind[1411]: Removed session 12. Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.429 [WARNING][5880] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"af9f3d40-f917-4ead-8678-9e2ee2432935", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c52f12cd45e59e5796acbb9d6a8277bad6fc1593928693cabfeb2de42fc2658", Pod:"coredns-674b8bbfcf-dzzmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb3c862558b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.430 [INFO][5880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.430 [INFO][5880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" iface="eth0" netns="" Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.430 [INFO][5880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.430 [INFO][5880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.451 [INFO][5889] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" HandleID="k8s-pod-network.2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.451 [INFO][5889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.451 [INFO][5889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.460 [WARNING][5889] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" HandleID="k8s-pod-network.2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.460 [INFO][5889] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" HandleID="k8s-pod-network.2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Workload="localhost-k8s-coredns--674b8bbfcf--dzzmz-eth0" Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.462 [INFO][5889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:28.466686 containerd[1438]: 2025-09-09 00:37:28.464 [INFO][5880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d" Sep 9 00:37:28.467287 containerd[1438]: time="2025-09-09T00:37:28.466717771Z" level=info msg="TearDown network for sandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\" successfully" Sep 9 00:37:28.484570 containerd[1438]: time="2025-09-09T00:37:28.484504161Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:37:28.484663 containerd[1438]: time="2025-09-09T00:37:28.484599615Z" level=info msg="RemovePodSandbox \"2850813328dfa1eea3f68e377511bfe93335df8fb2fe9da738c36ee2b528601d\" returns successfully" Sep 9 00:37:28.485072 containerd[1438]: time="2025-09-09T00:37:28.485044036Z" level=info msg="StopPodSandbox for \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\"" Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.537 [WARNING][5908] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" WorkloadEndpoint="localhost-k8s-whisker--6f8ff9d7b9--lggcj-eth0" Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.537 [INFO][5908] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.537 [INFO][5908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" iface="eth0" netns="" Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.537 [INFO][5908] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.537 [INFO][5908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.560 [INFO][5916] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" HandleID="k8s-pod-network.b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Workload="localhost-k8s-whisker--6f8ff9d7b9--lggcj-eth0" Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.560 [INFO][5916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.560 [INFO][5916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.569 [WARNING][5916] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" HandleID="k8s-pod-network.b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Workload="localhost-k8s-whisker--6f8ff9d7b9--lggcj-eth0" Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.569 [INFO][5916] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" HandleID="k8s-pod-network.b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Workload="localhost-k8s-whisker--6f8ff9d7b9--lggcj-eth0" Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.570 [INFO][5916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:28.575580 containerd[1438]: 2025-09-09 00:37:28.572 [INFO][5908] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:28.575580 containerd[1438]: time="2025-09-09T00:37:28.575044975Z" level=info msg="TearDown network for sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\" successfully" Sep 9 00:37:28.575580 containerd[1438]: time="2025-09-09T00:37:28.575068898Z" level=info msg="StopPodSandbox for \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\" returns successfully" Sep 9 00:37:28.575580 containerd[1438]: time="2025-09-09T00:37:28.575495838Z" level=info msg="RemovePodSandbox for \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\"" Sep 9 00:37:28.575580 containerd[1438]: time="2025-09-09T00:37:28.575528202Z" level=info msg="Forcibly stopping sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\"" Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.623 [WARNING][5934] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" WorkloadEndpoint="localhost-k8s-whisker--6f8ff9d7b9--lggcj-eth0" Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.623 [INFO][5934] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.623 [INFO][5934] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" iface="eth0" netns="" Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.623 [INFO][5934] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.623 [INFO][5934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.640 [INFO][5942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" HandleID="k8s-pod-network.b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Workload="localhost-k8s-whisker--6f8ff9d7b9--lggcj-eth0" Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.641 [INFO][5942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.641 [INFO][5942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.650 [WARNING][5942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" HandleID="k8s-pod-network.b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Workload="localhost-k8s-whisker--6f8ff9d7b9--lggcj-eth0" Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.650 [INFO][5942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" HandleID="k8s-pod-network.b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Workload="localhost-k8s-whisker--6f8ff9d7b9--lggcj-eth0" Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.652 [INFO][5942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:28.655573 containerd[1438]: 2025-09-09 00:37:28.653 [INFO][5934] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676" Sep 9 00:37:28.655573 containerd[1438]: time="2025-09-09T00:37:28.655466303Z" level=info msg="TearDown network for sandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\" successfully" Sep 9 00:37:28.662946 containerd[1438]: time="2025-09-09T00:37:28.662776238Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:37:28.662946 containerd[1438]: time="2025-09-09T00:37:28.662858330Z" level=info msg="RemovePodSandbox \"b83b5a9250660c3985c7a95a76e880abc70940f17b72e96230402a687c37b676\" returns successfully" Sep 9 00:37:28.663354 containerd[1438]: time="2025-09-09T00:37:28.663309312Z" level=info msg="StopPodSandbox for \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\"" Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.703 [WARNING][5959] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qpjc4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"bbe52a64-481a-4444-8cd9-6a232570debc", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8", Pod:"goldmane-54d579b49d-qpjc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali81508c6f846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.703 [INFO][5959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.703 [INFO][5959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" iface="eth0" netns="" Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.703 [INFO][5959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.703 [INFO][5959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.721 [INFO][5968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" HandleID="k8s-pod-network.5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.721 [INFO][5968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.721 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.730 [WARNING][5968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" HandleID="k8s-pod-network.5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.730 [INFO][5968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" HandleID="k8s-pod-network.5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.732 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:28.735714 containerd[1438]: 2025-09-09 00:37:28.734 [INFO][5959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:28.736105 containerd[1438]: time="2025-09-09T00:37:28.735756653Z" level=info msg="TearDown network for sandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\" successfully" Sep 9 00:37:28.736105 containerd[1438]: time="2025-09-09T00:37:28.735780617Z" level=info msg="StopPodSandbox for \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\" returns successfully" Sep 9 00:37:28.736306 containerd[1438]: time="2025-09-09T00:37:28.736284207Z" level=info msg="RemovePodSandbox for \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\"" Sep 9 00:37:28.736339 containerd[1438]: time="2025-09-09T00:37:28.736313731Z" level=info msg="Forcibly stopping sandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\"" Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.772 [WARNING][5986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qpjc4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"bbe52a64-481a-4444-8cd9-6a232570debc", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d6d01066db713d2378bd165cfcc572fd861811f9eaefe7cef3c48f12d63fde8", Pod:"goldmane-54d579b49d-qpjc4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali81508c6f846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.772 [INFO][5986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.772 [INFO][5986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" iface="eth0" netns="" Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.772 [INFO][5986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.772 [INFO][5986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.791 [INFO][5994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" HandleID="k8s-pod-network.5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.791 [INFO][5994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.791 [INFO][5994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.800 [WARNING][5994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" HandleID="k8s-pod-network.5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.800 [INFO][5994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" HandleID="k8s-pod-network.5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Workload="localhost-k8s-goldmane--54d579b49d--qpjc4-eth0" Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.801 [INFO][5994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:28.804839 containerd[1438]: 2025-09-09 00:37:28.803 [INFO][5986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71" Sep 9 00:37:28.805334 containerd[1438]: time="2025-09-09T00:37:28.804879133Z" level=info msg="TearDown network for sandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\" successfully" Sep 9 00:37:28.807926 containerd[1438]: time="2025-09-09T00:37:28.807897392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:37:28.807990 containerd[1438]: time="2025-09-09T00:37:28.807974322Z" level=info msg="RemovePodSandbox \"5bec8972f8a0570a83f244ffb25924d0170b6bf53eabb5b1a0162ddb8ab83c71\" returns successfully" Sep 9 00:37:28.808564 containerd[1438]: time="2025-09-09T00:37:28.808527199Z" level=info msg="StopPodSandbox for \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\"" Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.841 [WARNING][6011] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0", GenerateName:"calico-apiserver-78d549548b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a376016b-b8d6-4e3c-a75d-da673ae09400", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d549548b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799", Pod:"calico-apiserver-78d549548b-pkrjv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e722e88a77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.841 [INFO][6011] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.841 [INFO][6011] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" iface="eth0" netns="" Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.841 [INFO][6011] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.841 [INFO][6011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.858 [INFO][6021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" HandleID="k8s-pod-network.15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.859 [INFO][6021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.859 [INFO][6021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.867 [WARNING][6021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" HandleID="k8s-pod-network.15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.867 [INFO][6021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" HandleID="k8s-pod-network.15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.869 [INFO][6021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:28.871852 containerd[1438]: 2025-09-09 00:37:28.870 [INFO][6011] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:28.872376 containerd[1438]: time="2025-09-09T00:37:28.871889479Z" level=info msg="TearDown network for sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\" successfully" Sep 9 00:37:28.872376 containerd[1438]: time="2025-09-09T00:37:28.871913082Z" level=info msg="StopPodSandbox for \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\" returns successfully" Sep 9 00:37:28.872624 containerd[1438]: time="2025-09-09T00:37:28.872488002Z" level=info msg="RemovePodSandbox for \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\"" Sep 9 00:37:28.872624 containerd[1438]: time="2025-09-09T00:37:28.872549690Z" level=info msg="Forcibly stopping sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\"" Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.905 [WARNING][6039] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0", GenerateName:"calico-apiserver-78d549548b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a376016b-b8d6-4e3c-a75d-da673ae09400", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d549548b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9a646db11af984882d48151a8fc6a26ac852859ff8c721417a08b72e70d7799", Pod:"calico-apiserver-78d549548b-pkrjv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e722e88a77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.905 [INFO][6039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.905 [INFO][6039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" iface="eth0" netns="" Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.905 [INFO][6039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.906 [INFO][6039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.923 [INFO][6048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" HandleID="k8s-pod-network.15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.923 [INFO][6048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.923 [INFO][6048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.932 [WARNING][6048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" HandleID="k8s-pod-network.15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.932 [INFO][6048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" HandleID="k8s-pod-network.15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Workload="localhost-k8s-calico--apiserver--78d549548b--pkrjv-eth0" Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.934 [INFO][6048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:28.937793 containerd[1438]: 2025-09-09 00:37:28.936 [INFO][6039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e" Sep 9 00:37:28.938175 containerd[1438]: time="2025-09-09T00:37:28.937831916Z" level=info msg="TearDown network for sandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\" successfully" Sep 9 00:37:28.951492 containerd[1438]: time="2025-09-09T00:37:28.951437366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:37:28.951647 containerd[1438]: time="2025-09-09T00:37:28.951506095Z" level=info msg="RemovePodSandbox \"15d45e3e87cc138b714792e03b16fff44ed2cabb7fffb5c2bd0df3b36b106a1e\" returns successfully" Sep 9 00:37:28.952323 containerd[1438]: time="2025-09-09T00:37:28.951965279Z" level=info msg="StopPodSandbox for \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\"" Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:28.982 [WARNING][6065] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f48cr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f85e14c3-579a-4825-b5fe-c349bc6999fc", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95", Pod:"coredns-674b8bbfcf-f48cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fd083c7c6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:28.982 [INFO][6065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:28.982 [INFO][6065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" iface="eth0" netns="" Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:28.982 [INFO][6065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:28.982 [INFO][6065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:29.011 [INFO][6075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" HandleID="k8s-pod-network.0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:29.012 [INFO][6075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:29.012 [INFO][6075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:29.022 [WARNING][6075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" HandleID="k8s-pod-network.0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:29.023 [INFO][6075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" HandleID="k8s-pod-network.0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:29.024 [INFO][6075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:29.028155 containerd[1438]: 2025-09-09 00:37:29.026 [INFO][6065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:29.028749 containerd[1438]: time="2025-09-09T00:37:29.028147178Z" level=info msg="TearDown network for sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\" successfully" Sep 9 00:37:29.028749 containerd[1438]: time="2025-09-09T00:37:29.028174621Z" level=info msg="StopPodSandbox for \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\" returns successfully" Sep 9 00:37:29.029225 containerd[1438]: time="2025-09-09T00:37:29.028930365Z" level=info msg="RemovePodSandbox for \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\"" Sep 9 00:37:29.029225 containerd[1438]: time="2025-09-09T00:37:29.028981572Z" level=info msg="Forcibly stopping sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\"" Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.066 [WARNING][6093] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f48cr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f85e14c3-579a-4825-b5fe-c349bc6999fc", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f54de673564a84e2d627872d69faf7301c946356a65dfd03639299fa7fd72e95", Pod:"coredns-674b8bbfcf-f48cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fd083c7c6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.067 [INFO][6093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.067 [INFO][6093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" iface="eth0" netns="" Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.067 [INFO][6093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.067 [INFO][6093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.094 [INFO][6101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" HandleID="k8s-pod-network.0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.094 [INFO][6101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.094 [INFO][6101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.103 [WARNING][6101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" HandleID="k8s-pod-network.0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.103 [INFO][6101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" HandleID="k8s-pod-network.0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Workload="localhost-k8s-coredns--674b8bbfcf--f48cr-eth0" Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.105 [INFO][6101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:29.108184 containerd[1438]: 2025-09-09 00:37:29.106 [INFO][6093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1" Sep 9 00:37:29.108630 containerd[1438]: time="2025-09-09T00:37:29.108222735Z" level=info msg="TearDown network for sandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\" successfully" Sep 9 00:37:29.124756 containerd[1438]: time="2025-09-09T00:37:29.124524773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:37:29.124756 containerd[1438]: time="2025-09-09T00:37:29.124685275Z" level=info msg="RemovePodSandbox \"0ec6102a494ba55dcf1267bf3109afe755f7358b20ee66d0091afb94d268e0a1\" returns successfully" Sep 9 00:37:29.125509 containerd[1438]: time="2025-09-09T00:37:29.125226990Z" level=info msg="StopPodSandbox for \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\"" Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.159 [WARNING][6119] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0", GenerateName:"calico-kube-controllers-7d4886c776-", Namespace:"calico-system", SelfLink:"", UID:"4ddd1c7a-7179-4302-aedf-d6664fe29be7", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4886c776", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829", Pod:"calico-kube-controllers-7d4886c776-cnp84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ee15e7f08c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.159 [INFO][6119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.159 [INFO][6119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" iface="eth0" netns="" Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.159 [INFO][6119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.159 [INFO][6119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.181 [INFO][6128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" HandleID="k8s-pod-network.97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.181 [INFO][6128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.181 [INFO][6128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.192 [WARNING][6128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" HandleID="k8s-pod-network.97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.192 [INFO][6128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" HandleID="k8s-pod-network.97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.195 [INFO][6128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:29.202559 containerd[1438]: 2025-09-09 00:37:29.199 [INFO][6119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:29.202559 containerd[1438]: time="2025-09-09T00:37:29.202509803Z" level=info msg="TearDown network for sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\" successfully" Sep 9 00:37:29.202559 containerd[1438]: time="2025-09-09T00:37:29.202561451Z" level=info msg="StopPodSandbox for \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\" returns successfully" Sep 9 00:37:29.203603 containerd[1438]: time="2025-09-09T00:37:29.203180576Z" level=info msg="RemovePodSandbox for \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\"" Sep 9 00:37:29.203603 containerd[1438]: time="2025-09-09T00:37:29.203211540Z" level=info msg="Forcibly stopping sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\"" Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.239 [WARNING][6146] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0", GenerateName:"calico-kube-controllers-7d4886c776-", Namespace:"calico-system", SelfLink:"", UID:"4ddd1c7a-7179-4302-aedf-d6664fe29be7", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4886c776", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31b65dca85158510e5e89eb22aa0762c94b8b2db7b341d09ac574452ebaec829", Pod:"calico-kube-controllers-7d4886c776-cnp84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ee15e7f08c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.239 [INFO][6146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.239 [INFO][6146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" iface="eth0" netns="" Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.239 [INFO][6146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.239 [INFO][6146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.256 [INFO][6155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" HandleID="k8s-pod-network.97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.256 [INFO][6155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.256 [INFO][6155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.266 [WARNING][6155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" HandleID="k8s-pod-network.97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.266 [INFO][6155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" HandleID="k8s-pod-network.97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Workload="localhost-k8s-calico--kube--controllers--7d4886c776--cnp84-eth0" Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.267 [INFO][6155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:29.271551 containerd[1438]: 2025-09-09 00:37:29.270 [INFO][6146] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb" Sep 9 00:37:29.271993 containerd[1438]: time="2025-09-09T00:37:29.271604132Z" level=info msg="TearDown network for sandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\" successfully" Sep 9 00:37:29.276182 containerd[1438]: time="2025-09-09T00:37:29.276122273Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:37:29.276995 containerd[1438]: time="2025-09-09T00:37:29.276191923Z" level=info msg="RemovePodSandbox \"97036a0f07c2e797ddf790c2781e54d059d766ba58b9d09daf54473fa82303eb\" returns successfully" Sep 9 00:37:29.276995 containerd[1438]: time="2025-09-09T00:37:29.276669908Z" level=info msg="StopPodSandbox for \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\"" Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.310 [WARNING][6174] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0", GenerateName:"calico-apiserver-78d549548b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f592feec-dcfc-407a-bf1e-672a950349d6", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d549548b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a", Pod:"calico-apiserver-78d549548b-j9zf9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bb5c0efd9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.310 [INFO][6174] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.310 [INFO][6174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" iface="eth0" netns="" Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.310 [INFO][6174] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.310 [INFO][6174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.327 [INFO][6182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" HandleID="k8s-pod-network.bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.327 [INFO][6182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.327 [INFO][6182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.337 [WARNING][6182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" HandleID="k8s-pod-network.bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.337 [INFO][6182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" HandleID="k8s-pod-network.bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.338 [INFO][6182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:29.341730 containerd[1438]: 2025-09-09 00:37:29.340 [INFO][6174] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:29.342412 containerd[1438]: time="2025-09-09T00:37:29.342162423Z" level=info msg="TearDown network for sandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\" successfully" Sep 9 00:37:29.342412 containerd[1438]: time="2025-09-09T00:37:29.342191587Z" level=info msg="StopPodSandbox for \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\" returns successfully" Sep 9 00:37:29.342926 containerd[1438]: time="2025-09-09T00:37:29.342616245Z" level=info msg="RemovePodSandbox for \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\"" Sep 9 00:37:29.342926 containerd[1438]: time="2025-09-09T00:37:29.342642448Z" level=info msg="Forcibly stopping sandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\"" Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.389 [WARNING][6199] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0", GenerateName:"calico-apiserver-78d549548b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f592feec-dcfc-407a-bf1e-672a950349d6", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d549548b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd29bdef13e7323b9e848ebcb6458b8ba6142329792e5a62f0e492ef7cfa164a", Pod:"calico-apiserver-78d549548b-j9zf9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bb5c0efd9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.389 [INFO][6199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.389 [INFO][6199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" iface="eth0" netns="" Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.389 [INFO][6199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.389 [INFO][6199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.408 [INFO][6208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" HandleID="k8s-pod-network.bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.410 [INFO][6208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.410 [INFO][6208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.433 [WARNING][6208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" HandleID="k8s-pod-network.bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.433 [INFO][6208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" HandleID="k8s-pod-network.bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Workload="localhost-k8s-calico--apiserver--78d549548b--j9zf9-eth0" Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.436 [INFO][6208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:29.439034 containerd[1438]: 2025-09-09 00:37:29.437 [INFO][6199] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba" Sep 9 00:37:29.440241 containerd[1438]: time="2025-09-09T00:37:29.439517073Z" level=info msg="TearDown network for sandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\" successfully" Sep 9 00:37:29.444673 containerd[1438]: time="2025-09-09T00:37:29.444633455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:37:29.444906 containerd[1438]: time="2025-09-09T00:37:29.444884410Z" level=info msg="RemovePodSandbox \"bac693681be1658e41c16025691bde69e78fa3d56c2d77ceae4e9b32e15317ba\" returns successfully" Sep 9 00:37:29.445555 containerd[1438]: time="2025-09-09T00:37:29.445528018Z" level=info msg="StopPodSandbox for \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\"" Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.487 [WARNING][6226] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5bczn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5b22ce9a-63bb-41fc-bc0b-217e50145ea4", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13", Pod:"csi-node-driver-5bczn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide66a670e1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.487 [INFO][6226] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.487 [INFO][6226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" iface="eth0" netns="" Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.487 [INFO][6226] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.488 [INFO][6226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.517 [INFO][6234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" HandleID="k8s-pod-network.6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.517 [INFO][6234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.517 [INFO][6234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.530 [WARNING][6234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" HandleID="k8s-pod-network.6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.530 [INFO][6234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" HandleID="k8s-pod-network.6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.533 [INFO][6234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:29.538354 containerd[1438]: 2025-09-09 00:37:29.535 [INFO][6226] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:29.538835 containerd[1438]: time="2025-09-09T00:37:29.538399373Z" level=info msg="TearDown network for sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\" successfully" Sep 9 00:37:29.538835 containerd[1438]: time="2025-09-09T00:37:29.538424376Z" level=info msg="StopPodSandbox for \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\" returns successfully" Sep 9 00:37:29.539016 containerd[1438]: time="2025-09-09T00:37:29.538986693Z" level=info msg="RemovePodSandbox for \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\"" Sep 9 00:37:29.539055 containerd[1438]: time="2025-09-09T00:37:29.539022138Z" level=info msg="Forcibly stopping sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\"" Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.589 [WARNING][6252] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5bczn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5b22ce9a-63bb-41fc-bc0b-217e50145ea4", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c20c173ac6e87520f979e28641ff0f46b87138e3041e09f5c69b1204c2fe3e13", Pod:"csi-node-driver-5bczn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide66a670e1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.589 [INFO][6252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.589 [INFO][6252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" iface="eth0" netns="" Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.589 [INFO][6252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.589 [INFO][6252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.608 [INFO][6260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" HandleID="k8s-pod-network.6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.608 [INFO][6260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.608 [INFO][6260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.617 [WARNING][6260] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" HandleID="k8s-pod-network.6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.617 [INFO][6260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" HandleID="k8s-pod-network.6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Workload="localhost-k8s-csi--node--driver--5bczn-eth0" Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.619 [INFO][6260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:29.622641 containerd[1438]: 2025-09-09 00:37:29.620 [INFO][6252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1" Sep 9 00:37:29.622641 containerd[1438]: time="2025-09-09T00:37:29.622325218Z" level=info msg="TearDown network for sandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\" successfully" Sep 9 00:37:29.626483 containerd[1438]: time="2025-09-09T00:37:29.626443704Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:37:29.626608 containerd[1438]: time="2025-09-09T00:37:29.626513394Z" level=info msg="RemovePodSandbox \"6ec6f726627ae1ad80b3f345482005dbc511687b4924eb391f026956aeda74a1\" returns successfully" Sep 9 00:37:33.443308 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:40038.service - OpenSSH per-connection server daemon (10.0.0.1:40038). Sep 9 00:37:33.484024 sshd[6269]: Accepted publickey for core from 10.0.0.1 port 40038 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:37:33.485453 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:33.489577 systemd-logind[1411]: New session 13 of user core. Sep 9 00:37:33.499739 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:37:33.657025 sshd[6269]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:33.666133 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:40038.service: Deactivated successfully. Sep 9 00:37:33.669055 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:37:33.670587 systemd-logind[1411]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:37:33.680870 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:40044.service - OpenSSH per-connection server daemon (10.0.0.1:40044). Sep 9 00:37:33.681958 systemd-logind[1411]: Removed session 13. Sep 9 00:37:33.714640 sshd[6283]: Accepted publickey for core from 10.0.0.1 port 40044 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:37:33.716435 sshd[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:33.720351 systemd-logind[1411]: New session 14 of user core. Sep 9 00:37:33.727710 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:37:33.926705 sshd[6283]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:33.940321 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:40044.service: Deactivated successfully. Sep 9 00:37:33.942392 systemd[1]: session-14.scope: Deactivated successfully. 
Sep 9 00:37:33.443308 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:40038.service - OpenSSH per-connection server daemon (10.0.0.1:40038).
Sep 9 00:37:33.484024 sshd[6269]: Accepted publickey for core from 10.0.0.1 port 40038 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:37:33.485453 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:37:33.489577 systemd-logind[1411]: New session 13 of user core.
Sep 9 00:37:33.499739 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 00:37:33.657025 sshd[6269]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:33.666133 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:40038.service: Deactivated successfully.
Sep 9 00:37:33.669055 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 00:37:33.670587 systemd-logind[1411]: Session 13 logged out. Waiting for processes to exit.
Sep 9 00:37:33.680870 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:40044.service - OpenSSH per-connection server daemon (10.0.0.1:40044).
Sep 9 00:37:33.681958 systemd-logind[1411]: Removed session 13.
Sep 9 00:37:33.714640 sshd[6283]: Accepted publickey for core from 10.0.0.1 port 40044 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:37:33.716435 sshd[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:37:33.720351 systemd-logind[1411]: New session 14 of user core.
Sep 9 00:37:33.727710 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 00:37:33.926705 sshd[6283]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:33.940321 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:40044.service: Deactivated successfully.
Sep 9 00:37:33.942392 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 00:37:33.945054 systemd-logind[1411]: Session 14 logged out. Waiting for processes to exit.
Sep 9 00:37:33.952820 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:40054.service - OpenSSH per-connection server daemon (10.0.0.1:40054).
Sep 9 00:37:33.954363 systemd-logind[1411]: Removed session 14.
Sep 9 00:37:33.991322 sshd[6296]: Accepted publickey for core from 10.0.0.1 port 40054 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:37:33.992709 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:37:33.997673 systemd-logind[1411]: New session 15 of user core.
Sep 9 00:37:34.005682 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 00:37:34.687126 sshd[6296]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:34.703104 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:40054.service: Deactivated successfully.
Sep 9 00:37:34.706800 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 00:37:34.708908 systemd-logind[1411]: Session 15 logged out. Waiting for processes to exit.
Sep 9 00:37:34.716899 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:40064.service - OpenSSH per-connection server daemon (10.0.0.1:40064).
Sep 9 00:37:34.718239 systemd-logind[1411]: Removed session 15.
Sep 9 00:37:34.753781 sshd[6362]: Accepted publickey for core from 10.0.0.1 port 40064 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:37:34.755301 sshd[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:37:34.759608 systemd-logind[1411]: New session 16 of user core.
Sep 9 00:37:34.768719 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 00:37:35.211503 sshd[6362]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:35.224221 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:40064.service: Deactivated successfully.
Sep 9 00:37:35.228185 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 00:37:35.230993 systemd-logind[1411]: Session 16 logged out. Waiting for processes to exit.
Sep 9 00:37:35.237889 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:40072.service - OpenSSH per-connection server daemon (10.0.0.1:40072).
Sep 9 00:37:35.238726 systemd-logind[1411]: Removed session 16.
Sep 9 00:37:35.276419 sshd[6375]: Accepted publickey for core from 10.0.0.1 port 40072 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:37:35.277779 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:37:35.281607 systemd-logind[1411]: New session 17 of user core.
Sep 9 00:37:35.288712 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 00:37:35.422617 sshd[6375]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:35.425806 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:40072.service: Deactivated successfully.
Sep 9 00:37:35.427613 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 00:37:35.428192 systemd-logind[1411]: Session 17 logged out. Waiting for processes to exit.
Sep 9 00:37:35.429133 systemd-logind[1411]: Removed session 17.
Sep 9 00:37:40.457893 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:60796.service - OpenSSH per-connection server daemon (10.0.0.1:60796).
Sep 9 00:37:40.505448 sshd[6402]: Accepted publickey for core from 10.0.0.1 port 60796 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:37:40.507235 sshd[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:37:40.512706 systemd-logind[1411]: New session 18 of user core.
Sep 9 00:37:40.528934 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 00:37:40.668042 sshd[6402]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:40.672964 systemd-logind[1411]: Session 18 logged out. Waiting for processes to exit.
Sep 9 00:37:40.673243 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:60796.service: Deactivated successfully.
Sep 9 00:37:40.675334 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 00:37:40.679440 systemd-logind[1411]: Removed session 18.
Sep 9 00:37:41.267925 kubelet[2465]: E0909 00:37:41.267871 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:42.267734 kubelet[2465]: E0909 00:37:42.267693 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:45.677442 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:60802.service - OpenSSH per-connection server daemon (10.0.0.1:60802).
Sep 9 00:37:45.710140 sshd[6417]: Accepted publickey for core from 10.0.0.1 port 60802 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:37:45.711370 sshd[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:37:45.715365 systemd-logind[1411]: New session 19 of user core.
Sep 9 00:37:45.722714 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 00:37:45.870530 sshd[6417]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:45.874487 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:60802.service: Deactivated successfully.
Sep 9 00:37:45.876167 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 00:37:45.878004 systemd-logind[1411]: Session 19 logged out. Waiting for processes to exit.
Sep 9 00:37:45.879050 systemd-logind[1411]: Removed session 19.
Sep 9 00:37:50.881445 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:39542.service - OpenSSH per-connection server daemon (10.0.0.1:39542).
Sep 9 00:37:50.926329 sshd[6471]: Accepted publickey for core from 10.0.0.1 port 39542 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:37:50.927865 sshd[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:37:50.934401 systemd-logind[1411]: New session 20 of user core.
Sep 9 00:37:50.950753 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 00:37:51.494534 sshd[6471]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:51.499013 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:39542.service: Deactivated successfully.
Sep 9 00:37:51.501941 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 00:37:51.505454 systemd-logind[1411]: Session 20 logged out. Waiting for processes to exit.
Sep 9 00:37:51.507810 systemd-logind[1411]: Removed session 20.
Sep 9 00:37:52.715998 kubelet[2465]: I0909 00:37:52.715952 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
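The kubelet dns.go:153 "Nameserver limits exceeded" errors above are the standard warning that a resolv.conf lists more nameservers than Linux resolvers support: glibc honours at most three, so kubelet applies only the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and reports that the rest were omitted. A small illustrative checker for that limit, assuming the conventional /etc/resolv.conf path (this is not kubelet's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic glibc MAXNS limit

func main() {
	// Path is an assumption for illustration; kubelet works from the
	// resolv.conf it builds for each pod, not necessarily this file.
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("nameservers within limit: %v\n", servers)
	}
}
```

Run against a resolv.conf with more than three nameserver lines, this prints the same kept/omitted split the kubelet warning reports.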