Sep 5 00:36:17.846291 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 00:36:17.846312 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Sep 4 22:50:35 -00 2025
Sep 5 00:36:17.846321 kernel: KASLR enabled
Sep 5 00:36:17.846327 kernel: efi: EFI v2.7 by EDK II
Sep 5 00:36:17.846332 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 5 00:36:17.846338 kernel: random: crng init done
Sep 5 00:36:17.846345 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:36:17.846351 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 5 00:36:17.846357 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 5 00:36:17.846365 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:17.846371 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:17.846377 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:17.846382 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:17.846388 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:17.846396 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:17.846403 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:17.846410 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:17.846416 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:17.846422 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 5 00:36:17.846429 kernel: NUMA: Failed to initialise from firmware
Sep 5 00:36:17.846435 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:36:17.846442 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Sep 5 00:36:17.846448 kernel: Zone ranges:
Sep 5 00:36:17.846454 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:36:17.846460 kernel: DMA32 empty
Sep 5 00:36:17.846475 kernel: Normal empty
Sep 5 00:36:17.846481 kernel: Movable zone start for each node
Sep 5 00:36:17.846487 kernel: Early memory node ranges
Sep 5 00:36:17.846494 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 5 00:36:17.846500 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 5 00:36:17.846506 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 5 00:36:17.846513 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 5 00:36:17.846519 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 5 00:36:17.846525 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 5 00:36:17.846531 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 5 00:36:17.846537 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:36:17.846543 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 5 00:36:17.846551 kernel: psci: probing for conduit method from ACPI.
Sep 5 00:36:17.846558 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 00:36:17.846564 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 00:36:17.846573 kernel: psci: Trusted OS migration not required
Sep 5 00:36:17.846579 kernel: psci: SMC Calling Convention v1.1
Sep 5 00:36:17.846586 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 5 00:36:17.846594 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 00:36:17.846601 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 00:36:17.846608 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 5 00:36:17.846615 kernel: Detected PIPT I-cache on CPU0
Sep 5 00:36:17.846621 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 00:36:17.846628 kernel: CPU features: detected: Hardware dirty bit management
Sep 5 00:36:17.846647 kernel: CPU features: detected: Spectre-v4
Sep 5 00:36:17.846653 kernel: CPU features: detected: Spectre-BHB
Sep 5 00:36:17.846660 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 00:36:17.846667 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 00:36:17.846675 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 00:36:17.846681 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 00:36:17.846688 kernel: alternatives: applying boot alternatives
Sep 5 00:36:17.846696 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74b18a518d158648275add16e3ab4f37e237ff7b3b2938818abfe7ffe97d585a
Sep 5 00:36:17.846706 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:36:17.846713 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:36:17.846720 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:36:17.846726 kernel: Fallback order for Node 0: 0
Sep 5 00:36:17.846733 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 5 00:36:17.846740 kernel: Policy zone: DMA
Sep 5 00:36:17.846746 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:36:17.846754 kernel: software IO TLB: area num 4.
Sep 5 00:36:17.846761 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 5 00:36:17.846768 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Sep 5 00:36:17.846775 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:36:17.846788 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:36:17.846795 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:36:17.846802 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:36:17.846809 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:36:17.846816 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:36:17.846823 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:36:17.846829 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:36:17.846837 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 00:36:17.846844 kernel: GICv3: 256 SPIs implemented
Sep 5 00:36:17.846850 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 00:36:17.846857 kernel: Root IRQ handler: gic_handle_irq
Sep 5 00:36:17.846863 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 5 00:36:17.846870 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 5 00:36:17.846877 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 5 00:36:17.846883 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 5 00:36:17.846890 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 5 00:36:17.846897 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 5 00:36:17.846914 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 5 00:36:17.846921 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:36:17.846929 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:36:17.846936 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 5 00:36:17.846942 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 5 00:36:17.846949 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 5 00:36:17.846956 kernel: arm-pv: using stolen time PV
Sep 5 00:36:17.846963 kernel: Console: colour dummy device 80x25
Sep 5 00:36:17.846969 kernel: ACPI: Core revision 20230628
Sep 5 00:36:17.846976 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 5 00:36:17.846983 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:36:17.846990 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 00:36:17.846998 kernel: landlock: Up and running.
Sep 5 00:36:17.847005 kernel: SELinux: Initializing.
Sep 5 00:36:17.847011 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:36:17.847018 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:36:17.847025 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:36:17.847037 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:36:17.847044 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:36:17.847051 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:36:17.847058 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 5 00:36:17.847066 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 5 00:36:17.847073 kernel: Remapping and enabling EFI services.
Sep 5 00:36:17.847080 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:36:17.847086 kernel: Detected PIPT I-cache on CPU1
Sep 5 00:36:17.847093 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 5 00:36:17.847100 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 5 00:36:17.847107 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:36:17.847113 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 5 00:36:17.847120 kernel: Detected PIPT I-cache on CPU2
Sep 5 00:36:17.847127 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 5 00:36:17.847135 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 5 00:36:17.847142 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:36:17.847153 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 5 00:36:17.847161 kernel: Detected PIPT I-cache on CPU3
Sep 5 00:36:17.847169 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 5 00:36:17.847176 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 5 00:36:17.847183 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:36:17.847190 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 5 00:36:17.847197 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:36:17.847205 kernel: SMP: Total of 4 processors activated.
Sep 5 00:36:17.847213 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 00:36:17.847220 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 5 00:36:17.847227 kernel: CPU features: detected: Common not Private translations
Sep 5 00:36:17.847235 kernel: CPU features: detected: CRC32 instructions
Sep 5 00:36:17.847242 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 5 00:36:17.847249 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 5 00:36:17.847256 kernel: CPU features: detected: LSE atomic instructions
Sep 5 00:36:17.847264 kernel: CPU features: detected: Privileged Access Never
Sep 5 00:36:17.847272 kernel: CPU features: detected: RAS Extension Support
Sep 5 00:36:17.847290 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 5 00:36:17.847297 kernel: CPU: All CPU(s) started at EL1
Sep 5 00:36:17.847304 kernel: alternatives: applying system-wide alternatives
Sep 5 00:36:17.847311 kernel: devtmpfs: initialized
Sep 5 00:36:17.847319 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:36:17.847326 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:36:17.847333 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:36:17.847342 kernel: SMBIOS 3.0.0 present.
Sep 5 00:36:17.847349 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 5 00:36:17.847356 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:36:17.847363 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 00:36:17.847370 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 00:36:17.847382 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 00:36:17.847389 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:36:17.847396 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 5 00:36:17.847405 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:36:17.847412 kernel: cpuidle: using governor menu
Sep 5 00:36:17.847419 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 00:36:17.847426 kernel: ASID allocator initialised with 32768 entries
Sep 5 00:36:17.847434 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:36:17.847441 kernel: Serial: AMBA PL011 UART driver
Sep 5 00:36:17.847448 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 5 00:36:17.847455 kernel: Modules: 0 pages in range for non-PLT usage
Sep 5 00:36:17.847462 kernel: Modules: 509008 pages in range for PLT usage
Sep 5 00:36:17.847474 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:36:17.847483 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:36:17.847490 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 00:36:17.847497 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 00:36:17.847504 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:36:17.847512 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:36:17.847519 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 00:36:17.847526 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 00:36:17.847533 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:36:17.847540 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:36:17.847548 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:36:17.847556 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:36:17.847563 kernel: ACPI: Interpreter enabled
Sep 5 00:36:17.847570 kernel: ACPI: Using GIC for interrupt routing
Sep 5 00:36:17.847577 kernel: ACPI: MCFG table detected, 1 entries
Sep 5 00:36:17.847584 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 5 00:36:17.847591 kernel: printk: console [ttyAMA0] enabled
Sep 5 00:36:17.847598 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:36:17.847730 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:36:17.847805 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 00:36:17.847868 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 00:36:17.847973 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 5 00:36:17.848048 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 5 00:36:17.848059 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 5 00:36:17.848066 kernel: PCI host bridge to bus 0000:00
Sep 5 00:36:17.848137 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 5 00:36:17.848197 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 5 00:36:17.848253 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 5 00:36:17.848309 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:36:17.848402 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 5 00:36:17.848492 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 5 00:36:17.848564 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 5 00:36:17.848632 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 5 00:36:17.848697 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 00:36:17.848764 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 00:36:17.848836 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 5 00:36:17.848920 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 5 00:36:17.848992 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 5 00:36:17.849058 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 5 00:36:17.849147 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 5 00:36:17.849157 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 5 00:36:17.849165 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 5 00:36:17.849172 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 5 00:36:17.849179 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 5 00:36:17.849187 kernel: iommu: Default domain type: Translated
Sep 5 00:36:17.849199 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 00:36:17.849212 kernel: efivars: Registered efivars operations
Sep 5 00:36:17.849219 kernel: vgaarb: loaded
Sep 5 00:36:17.849228 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 00:36:17.849235 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:36:17.849243 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:36:17.849250 kernel: pnp: PnP ACPI init
Sep 5 00:36:17.849332 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 5 00:36:17.849343 kernel: pnp: PnP ACPI: found 1 devices
Sep 5 00:36:17.849350 kernel: NET: Registered PF_INET protocol family
Sep 5 00:36:17.849357 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:36:17.849367 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:36:17.849374 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:36:17.849381 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:36:17.849389 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:36:17.849396 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:36:17.849403 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:36:17.849411 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:36:17.849418 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:36:17.849425 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:36:17.849433 kernel: kvm [1]: HYP mode not available
Sep 5 00:36:17.849441 kernel: Initialise system trusted keyrings
Sep 5 00:36:17.849448 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:36:17.849455 kernel: Key type asymmetric registered
Sep 5 00:36:17.849462 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:36:17.849476 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 00:36:17.849483 kernel: io scheduler mq-deadline registered
Sep 5 00:36:17.849490 kernel: io scheduler kyber registered
Sep 5 00:36:17.849497 kernel: io scheduler bfq registered
Sep 5 00:36:17.849507 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 5 00:36:17.849514 kernel: ACPI: button: Power Button [PWRB]
Sep 5 00:36:17.849521 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 5 00:36:17.849592 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 5 00:36:17.849601 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:36:17.849609 kernel: thunder_xcv, ver 1.0
Sep 5 00:36:17.849616 kernel: thunder_bgx, ver 1.0
Sep 5 00:36:17.849623 kernel: nicpf, ver 1.0
Sep 5 00:36:17.849630 kernel: nicvf, ver 1.0
Sep 5 00:36:17.849718 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 00:36:17.849778 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T00:36:17 UTC (1757032577)
Sep 5 00:36:17.849788 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 00:36:17.849796 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 5 00:36:17.849803 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 5 00:36:17.849810 kernel: watchdog: Hard watchdog permanently disabled
Sep 5 00:36:17.849817 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:36:17.849824 kernel: Segment Routing with IPv6
Sep 5 00:36:17.849834 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:36:17.849841 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:36:17.849849 kernel: Key type dns_resolver registered
Sep 5 00:36:17.849856 kernel: registered taskstats version 1
Sep 5 00:36:17.849863 kernel: Loading compiled-in X.509 certificates
Sep 5 00:36:17.849870 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: ff0f0c0ea2d5fe320cfcc368cee8225e09a20239'
Sep 5 00:36:17.849877 kernel: Key type .fscrypt registered
Sep 5 00:36:17.849884 kernel: Key type fscrypt-provisioning registered
Sep 5 00:36:17.849892 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:36:17.849900 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:36:17.849919 kernel: ima: No architecture policies found
Sep 5 00:36:17.849926 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 00:36:17.849933 kernel: clk: Disabling unused clocks
Sep 5 00:36:17.849940 kernel: Freeing unused kernel memory: 39424K
Sep 5 00:36:17.849947 kernel: Run /init as init process
Sep 5 00:36:17.849954 kernel: with arguments:
Sep 5 00:36:17.849961 kernel: /init
Sep 5 00:36:17.849968 kernel: with environment:
Sep 5 00:36:17.849977 kernel: HOME=/
Sep 5 00:36:17.849984 kernel: TERM=linux
Sep 5 00:36:17.849991 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:36:17.850000 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 00:36:17.850009 systemd[1]: Detected virtualization kvm.
Sep 5 00:36:17.850017 systemd[1]: Detected architecture arm64.
Sep 5 00:36:17.850025 systemd[1]: Running in initrd.
Sep 5 00:36:17.850037 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:36:17.850045 systemd[1]: Hostname set to <localhost>.
Sep 5 00:36:17.850053 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:36:17.850060 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:36:17.850068 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:36:17.850076 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:36:17.850085 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:36:17.850093 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:36:17.850102 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:36:17.850110 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:36:17.850119 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:36:17.850128 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:36:17.850135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:36:17.850143 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:36:17.850151 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:36:17.850161 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:36:17.850169 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:36:17.850177 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:36:17.850185 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:36:17.850193 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:36:17.850201 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:36:17.850209 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 00:36:17.850217 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:36:17.850225 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:36:17.850237 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:36:17.850247 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:36:17.850257 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:36:17.850267 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:36:17.850276 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 00:36:17.850284 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 00:36:17.850292 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:36:17.850300 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:36:17.850309 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:36:17.850317 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 00:36:17.850325 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:36:17.850333 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 00:36:17.850391 systemd-journald[237]: Collecting audit messages is disabled.
Sep 5 00:36:17.850414 systemd-journald[237]: Journal started
Sep 5 00:36:17.850433 systemd-journald[237]: Runtime Journal (/run/log/journal/e549362b77944718b8913cafd8ba1a39) is 5.9M, max 47.3M, 41.4M free.
Sep 5 00:36:17.849249 systemd-modules-load[239]: Inserted module 'overlay'
Sep 5 00:36:17.859926 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:36:17.859952 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:36:17.860937 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 00:36:17.863742 systemd-modules-load[239]: Inserted module 'br_netfilter'
Sep 5 00:36:17.864478 kernel: Bridge firewalling registered
Sep 5 00:36:17.864404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:36:17.865549 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:36:17.867044 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:36:17.884094 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:36:17.885642 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:36:17.887367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:36:17.889984 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:36:17.897875 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:36:17.898984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:36:17.902432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:36:17.905405 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:36:17.909570 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:36:17.911279 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 00:36:17.924234 dracut-cmdline[280]: dracut-dracut-053
Sep 5 00:36:17.926612 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74b18a518d158648275add16e3ab4f37e237ff7b3b2938818abfe7ffe97d585a
Sep 5 00:36:17.934157 systemd-resolved[277]: Positive Trust Anchors:
Sep 5 00:36:17.934172 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:36:17.934205 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:36:17.938847 systemd-resolved[277]: Defaulting to hostname 'linux'.
Sep 5 00:36:17.939847 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:36:17.942317 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:36:17.991932 kernel: SCSI subsystem initialized
Sep 5 00:36:17.996921 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 00:36:18.003956 kernel: iscsi: registered transport (tcp)
Sep 5 00:36:18.016932 kernel: iscsi: registered transport (qla4xxx)
Sep 5 00:36:18.016954 kernel: QLogic iSCSI HBA Driver
Sep 5 00:36:18.058387 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:36:18.068119 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 00:36:18.082926 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 00:36:18.082971 kernel: device-mapper: uevent: version 1.0.3
Sep 5 00:36:18.082981 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 00:36:18.128937 kernel: raid6: neonx8 gen() 15764 MB/s
Sep 5 00:36:18.145917 kernel: raid6: neonx4 gen() 15628 MB/s
Sep 5 00:36:18.162921 kernel: raid6: neonx2 gen() 13183 MB/s
Sep 5 00:36:18.179914 kernel: raid6: neonx1 gen() 10453 MB/s
Sep 5 00:36:18.196915 kernel: raid6: int64x8 gen() 6947 MB/s
Sep 5 00:36:18.213919 kernel: raid6: int64x4 gen() 7340 MB/s
Sep 5 00:36:18.230915 kernel: raid6: int64x2 gen() 6115 MB/s
Sep 5 00:36:18.247935 kernel: raid6: int64x1 gen() 5055 MB/s
Sep 5 00:36:18.247965 kernel: raid6: using algorithm neonx8 gen() 15764 MB/s
Sep 5 00:36:18.264932 kernel: raid6: .... xor() 12053 MB/s, rmw enabled
Sep 5 00:36:18.264945 kernel: raid6: using neon recovery algorithm
Sep 5 00:36:18.270016 kernel: xor: measuring software checksum speed
Sep 5 00:36:18.270031 kernel: 8regs : 19088 MB/sec
Sep 5 00:36:18.271072 kernel: 32regs : 19664 MB/sec
Sep 5 00:36:18.271084 kernel: arm64_neon : 26972 MB/sec
Sep 5 00:36:18.271098 kernel: xor: using function: arm64_neon (26972 MB/sec)
Sep 5 00:36:18.319947 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 00:36:18.330348 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:36:18.342043 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:36:18.353331 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Sep 5 00:36:18.357100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:36:18.366061 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 00:36:18.377469 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Sep 5 00:36:18.403050 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:36:18.413087 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:36:18.455059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:36:18.463085 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 00:36:18.478085 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:36:18.479992 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:36:18.482919 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:36:18.483773 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:36:18.491069 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 00:36:18.495924 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 5 00:36:18.500028 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:36:18.505759 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 00:36:18.506846 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:36:18.507002 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:36:18.511220 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 00:36:18.511241 kernel: GPT:9289727 != 19775487
Sep 5 00:36:18.511251 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 00:36:18.509726 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:36:18.515682 kernel: GPT:9289727 != 19775487
Sep 5 00:36:18.515700 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 00:36:18.515710 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:18.513376 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:36:18.513523 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:36:18.516572 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:36:18.526183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:36:18.530930 kernel: BTRFS: device fsid 5d680510-9485-4285-abb3-c1615b7945ba devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (509)
Sep 5 00:36:18.536947 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (517)
Sep 5 00:36:18.544237 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 00:36:18.546072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:36:18.551219 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 00:36:18.558548 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 00:36:18.559561 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 00:36:18.565364 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:36:18.580104 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 00:36:18.582048 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:36:18.586533 disk-uuid[552]: Primary Header is updated.
Sep 5 00:36:18.586533 disk-uuid[552]: Secondary Entries is updated.
Sep 5 00:36:18.586533 disk-uuid[552]: Secondary Header is updated.
Sep 5 00:36:18.591335 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:18.592936 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:18.595973 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:18.607049 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:36:19.598226 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:19.598281 disk-uuid[553]: The operation has completed successfully.
Sep 5 00:36:19.619704 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 00:36:19.620671 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 00:36:19.649084 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 00:36:19.651825 sh[575]: Success
Sep 5 00:36:19.661940 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 00:36:19.687350 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 00:36:19.701201 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 00:36:19.702655 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 00:36:19.712796 kernel: BTRFS info (device dm-0): first mount of filesystem 5d680510-9485-4285-abb3-c1615b7945ba
Sep 5 00:36:19.712831 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:36:19.712842 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 00:36:19.712852 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 00:36:19.713439 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 00:36:19.717198 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 00:36:19.718297 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 00:36:19.719004 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 00:36:19.721192 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 00:36:19.730661 kernel: BTRFS info (device vda6): first mount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:36:19.730702 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:36:19.730712 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:36:19.734586 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:36:19.740863 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 00:36:19.742926 kernel: BTRFS info (device vda6): last unmount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:36:19.748941 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 00:36:19.755046 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 00:36:19.811243 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:36:19.821563 ignition[669]: Ignition 2.19.0
Sep 5 00:36:19.821572 ignition[669]: Stage: fetch-offline
Sep 5 00:36:19.822075 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:36:19.821605 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:19.821614 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:19.821770 ignition[669]: parsed url from cmdline: ""
Sep 5 00:36:19.821773 ignition[669]: no config URL provided
Sep 5 00:36:19.821779 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 00:36:19.821786 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Sep 5 00:36:19.821808 ignition[669]: op(1): [started] loading QEMU firmware config module
Sep 5 00:36:19.821812 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 00:36:19.831593 ignition[669]: op(1): [finished] loading QEMU firmware config module
Sep 5 00:36:19.841783 systemd-networkd[764]: lo: Link UP
Sep 5 00:36:19.841795 systemd-networkd[764]: lo: Gained carrier
Sep 5 00:36:19.842515 systemd-networkd[764]: Enumeration completed
Sep 5 00:36:19.842620 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:36:19.842930 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:36:19.842933 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:36:19.843635 systemd-networkd[764]: eth0: Link UP
Sep 5 00:36:19.843638 systemd-networkd[764]: eth0: Gained carrier
Sep 5 00:36:19.843644 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:36:19.844728 systemd[1]: Reached target network.target - Network.
Sep 5 00:36:19.861943 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:36:19.883152 ignition[669]: parsing config with SHA512: 1f2cd6cea9f12876b9726894b7006cbbb6cdc467b1dca2ef09f10c7c5ae0c66c49ca5bb6bfad3fc3226465b392c14556ea264a1fddb3f21ff5de982c8ab0e082
Sep 5 00:36:19.887363 unknown[669]: fetched base config from "system"
Sep 5 00:36:19.887371 unknown[669]: fetched user config from "qemu"
Sep 5 00:36:19.887808 ignition[669]: fetch-offline: fetch-offline passed
Sep 5 00:36:19.889256 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:36:19.887869 ignition[669]: Ignition finished successfully
Sep 5 00:36:19.890563 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 00:36:19.902040 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 00:36:19.912141 ignition[770]: Ignition 2.19.0
Sep 5 00:36:19.912149 ignition[770]: Stage: kargs
Sep 5 00:36:19.912307 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:19.912316 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:19.913229 ignition[770]: kargs: kargs passed
Sep 5 00:36:19.913274 ignition[770]: Ignition finished successfully
Sep 5 00:36:19.916313 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 00:36:19.927059 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 00:36:19.936216 ignition[777]: Ignition 2.19.0
Sep 5 00:36:19.936225 ignition[777]: Stage: disks
Sep 5 00:36:19.936375 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:19.936384 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:19.937292 ignition[777]: disks: disks passed
Sep 5 00:36:19.938795 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 00:36:19.937336 ignition[777]: Ignition finished successfully
Sep 5 00:36:19.940179 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 00:36:19.941516 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 00:36:19.942799 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:36:19.944284 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:36:19.945773 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:36:19.954047 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 00:36:19.964271 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 00:36:19.968146 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 00:36:19.970806 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 00:36:20.014932 kernel: EXT4-fs (vda9): mounted filesystem a958ad86-437c-4ed7-b041-6695bea80f66 r/w with ordered data mode. Quota mode: none.
Sep 5 00:36:20.015342 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 00:36:20.016519 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:36:20.028993 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:36:20.030483 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 00:36:20.031966 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 00:36:20.032006 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 00:36:20.039927 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (795)
Sep 5 00:36:20.039948 kernel: BTRFS info (device vda6): first mount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:36:20.039958 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:36:20.039967 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:36:20.032026 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:36:20.042168 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:36:20.038680 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 00:36:20.048105 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 00:36:20.049748 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:36:20.078777 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 00:36:20.082613 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory
Sep 5 00:36:20.085738 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 00:36:20.089539 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 00:36:20.156658 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 00:36:20.167103 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 00:36:20.168550 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 00:36:20.173920 kernel: BTRFS info (device vda6): last unmount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:36:20.187110 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 00:36:20.193637 ignition[907]: INFO : Ignition 2.19.0
Sep 5 00:36:20.193637 ignition[907]: INFO : Stage: mount
Sep 5 00:36:20.194978 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:20.194978 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:20.194978 ignition[907]: INFO : mount: mount passed
Sep 5 00:36:20.194978 ignition[907]: INFO : Ignition finished successfully
Sep 5 00:36:20.195926 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 00:36:20.204004 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 00:36:20.711400 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 00:36:20.719071 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:36:20.725930 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (921)
Sep 5 00:36:20.725961 kernel: BTRFS info (device vda6): first mount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:36:20.727488 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:36:20.727507 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:36:20.731293 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:36:20.731817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:36:20.747993 ignition[939]: INFO : Ignition 2.19.0
Sep 5 00:36:20.747993 ignition[939]: INFO : Stage: files
Sep 5 00:36:20.749222 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:20.749222 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:20.749222 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 00:36:20.751959 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 00:36:20.751959 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 00:36:20.754144 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 00:36:20.754144 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 00:36:20.754144 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 00:36:20.753242 unknown[939]: wrote ssh authorized keys file for user: core
Sep 5 00:36:20.758091 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 5 00:36:20.758091 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 5 00:36:20.758091 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 00:36:20.758091 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 5 00:36:20.848233 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 5 00:36:21.093563 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 00:36:21.093563 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 00:36:21.096367 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 00:36:21.096367 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:36:21.096367 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:36:21.096367 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:36:21.096367 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:36:21.096367 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:36:21.096367 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:36:21.096367 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:36:21.108733 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:36:21.108733 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 00:36:21.108733 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 00:36:21.108733 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 00:36:21.108733 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 5 00:36:21.656223 systemd-networkd[764]: eth0: Gained IPv6LL
Sep 5 00:36:21.913565 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 5 00:36:23.034339 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 00:36:23.034339 ignition[939]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep 5 00:36:23.038388 ignition[939]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:36:23.056708 ignition[939]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:36:23.059639 ignition[939]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:36:23.059639 ignition[939]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:36:23.059639 ignition[939]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 00:36:23.059639 ignition[939]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 00:36:23.064186 ignition[939]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:36:23.064186 ignition[939]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:36:23.064186 ignition[939]: INFO : files: files passed
Sep 5 00:36:23.064186 ignition[939]: INFO : Ignition finished successfully
Sep 5 00:36:23.063945 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 00:36:23.075067 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 00:36:23.077217 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 00:36:23.078321 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 00:36:23.078414 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 00:36:23.084719 initrd-setup-root-after-ignition[967]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 00:36:23.087081 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:36:23.087081 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:36:23.089445 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:36:23.089747 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:36:23.092046 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 00:36:23.100078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 00:36:23.118161 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:36:23.118261 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 00:36:23.120073 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:36:23.121536 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:36:23.122965 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:36:23.123707 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:36:23.147639 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:36:23.160059 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:36:23.168028 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:36:23.168972 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:36:23.170656 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:36:23.172009 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:36:23.172121 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:36:23.174045 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:36:23.175579 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:36:23.176989 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:36:23.178379 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:36:23.179924 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:36:23.181566 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:36:23.182949 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:36:23.184656 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:36:23.186185 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:36:23.187557 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:36:23.188802 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:36:23.188925 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:36:23.190719 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:36:23.192173 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:36:23.193674 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:36:23.193764 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:36:23.195346 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:36:23.195465 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:36:23.197770 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:36:23.197886 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:36:23.199430 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:36:23.200702 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:36:23.200789 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:36:23.202306 systemd[1]: Stopped target slices.target - Slice Units. 
Sep 5 00:36:23.203644 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:36:23.204839 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:36:23.204942 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:36:23.206290 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:36:23.206373 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:36:23.207951 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:36:23.208062 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:36:23.209468 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:36:23.209577 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:36:23.221071 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 00:36:23.221764 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:36:23.221896 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:36:23.227133 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:36:23.227806 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:36:23.227948 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:36:23.229391 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:36:23.232670 ignition[994]: INFO : Ignition 2.19.0 Sep 5 00:36:23.232670 ignition[994]: INFO : Stage: umount Sep 5 00:36:23.232670 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:36:23.232670 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:36:23.229505 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:36:23.236290 ignition[994]: INFO : umount: umount passed Sep 5 00:36:23.236290 ignition[994]: INFO : Ignition finished successfully Sep 5 00:36:23.235481 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:36:23.236938 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:36:23.238686 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 00:36:23.238771 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 00:36:23.240600 systemd[1]: Stopped target network.target - Network. Sep 5 00:36:23.243157 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:36:23.243227 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 00:36:23.244417 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:36:23.244464 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:36:23.245839 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:36:23.245880 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:36:23.247809 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:36:23.247862 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:36:23.249412 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:36:23.250866 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:36:23.253040 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:36:23.259902 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:36:23.260039 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
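The ignition[994] banner above shows the final umount stage starting, and it reports where base configs were looked for; both directories come straight from the log ("qemu" being the detected platform). A quick inventory sketch of those paths; the *.ign naming of fragments is an assumption:

```python
from pathlib import Path

# Directories reported by the ignition[994] entries above.
search_dirs = [
    Path("/usr/lib/ignition/base.d"),
    Path("/usr/lib/ignition/base.platform.d/qemu"),
]
for d in search_dirs:
    fragments = sorted(d.glob("*.ign")) if d.is_dir() else []  # *.ign is assumed
    print(f"{d}: {[f.name for f in fragments] or 'no configs'}")
```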
Sep 5 00:36:23.262480 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:36:23.262549 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:36:23.262965 systemd-networkd[764]: eth0: DHCPv6 lease lost Sep 5 00:36:23.264415 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:36:23.265727 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:36:23.267861 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 00:36:23.267935 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:36:23.278055 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 00:36:23.278766 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 00:36:23.278832 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:36:23.280488 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:36:23.280534 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:36:23.281817 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:36:23.281856 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:36:23.283774 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:36:23.296550 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 00:36:23.296664 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 00:36:23.298277 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:36:23.298390 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:36:23.300523 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:36:23.300584 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:36:23.301668 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:36:23.301700 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:36:23.303379 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:36:23.303430 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:36:23.305441 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:36:23.305483 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:36:23.307708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:36:23.307758 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:36:23.311592 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:36:23.312778 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:36:23.312835 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:36:23.314665 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:36:23.314708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:36:23.316524 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:36:23.316654 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 00:36:23.318219 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:36:23.318316 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Sep 5 00:36:23.320417 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:36:23.320527 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:36:23.321997 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:36:23.329063 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:36:23.334829 systemd[1]: Switching root. Sep 5 00:36:23.364714 systemd-journald[237]: Journal stopped Sep 5 00:36:24.046295 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 5 00:36:24.046358 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:36:24.046373 kernel: SELinux: policy capability open_perms=1 Sep 5 00:36:24.046384 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:36:24.046393 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:36:24.046402 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:36:24.046414 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:36:24.046432 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:36:24.046443 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:36:24.046453 kernel: audit: type=1403 audit(1757032583.532:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:36:24.046463 systemd[1]: Successfully loaded SELinux policy in 32.506ms. Sep 5 00:36:24.046482 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.922ms. Sep 5 00:36:24.046494 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:36:24.046505 systemd[1]: Detected virtualization kvm. Sep 5 00:36:24.046515 systemd[1]: Detected architecture arm64. Sep 5 00:36:24.046525 systemd[1]: Detected first boot. Sep 5 00:36:24.046539 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:36:24.046550 zram_generator::config[1061]: No configuration found. Sep 5 00:36:24.046565 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:36:24.046577 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:36:24.046587 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 00:36:24.046598 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:36:24.046609 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:36:24.046619 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:36:24.046630 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:36:24.046640 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 00:36:24.046651 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:36:24.046663 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:36:24.046675 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:36:24.046686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:36:24.046697 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
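The systemd 255 banner above encodes compile-time options as +FLAG/-FLAG tokens (the trailing default-hierarchy=unified is a separate setting). Splitting that exact string makes a later message legible: -BPF_FRAMEWORK is why journald soon warns that the local system "does not support BPF/cgroup firewalling".

```python
# The feature string exactly as printed in the systemd[1] banner above.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
          "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
          "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
          "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")
enabled = {t[1:] for t in banner.split() if t.startswith("+")}
disabled = {t[1:] for t in banner.split() if t.startswith("-")}
print(f"{len(enabled)} enabled, {len(disabled)} disabled")
print("BPF_FRAMEWORK" in disabled)  # True: explains the journald firewalling warning
```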
Sep 5 00:36:24.046708 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:36:24.046719 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:36:24.046730 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 00:36:24.046741 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:36:24.046752 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 5 00:36:24.046764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:36:24.046775 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:36:24.046785 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:36:24.046796 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:36:24.046806 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:36:24.046816 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:36:24.046827 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:36:24.046837 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:36:24.046850 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 00:36:24.046860 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 00:36:24.046871 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:36:24.046886 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:36:24.046897 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:36:24.046916 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:36:24.046928 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:36:24.046939 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:36:24.046950 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 00:36:24.046961 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:36:24.046974 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:36:24.046986 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:36:24.046996 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 00:36:24.047007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:36:24.047017 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:36:24.047028 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 00:36:24.047038 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:36:24.047048 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:36:24.047060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:36:24.047071 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 00:36:24.047082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
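Unit names like dev-disk-by\x2dlabel-OEM.device here and system-serial\x2dgetty.slice above use systemd's path escaping: "/" maps to "-" and reserved bytes to \xNN. A minimal decoder for the device units seen in this log (not a full systemd-escape(1) replacement):

```python
import re

# Device-unit names encode a filesystem path: "-" separates path components
# and literal bytes are escaped as \xNN (so a real hyphen becomes \x2d).
def unescape_unit(name: str) -> str:
    stem = name.rsplit(".", 1)[0]            # drop the ".device" suffix
    stem = stem.replace("-", "/")            # "-" -> "/" first ...
    return "/" + re.sub(r"\\x([0-9a-f]{2})", # ... then decode \xNN escapes
                        lambda m: chr(int(m.group(1), 16)), stem)

print(unescape_unit(r"dev-disk-by\x2dlabel-OEM.device"))  # /dev/disk/by-label/OEM
```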
Sep 5 00:36:24.047092 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 00:36:24.047103 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 5 00:36:24.047114 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 5 00:36:24.047128 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:36:24.047140 kernel: fuse: init (API version 7.39) Sep 5 00:36:24.047150 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:36:24.047163 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:36:24.047174 kernel: loop: module loaded Sep 5 00:36:24.047183 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:36:24.047193 kernel: ACPI: bus type drm_connector registered Sep 5 00:36:24.047203 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:36:24.047231 systemd-journald[1136]: Collecting audit messages is disabled. Sep 5 00:36:24.047251 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:36:24.047262 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:36:24.047275 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 00:36:24.047286 systemd-journald[1136]: Journal started Sep 5 00:36:24.047308 systemd-journald[1136]: Runtime Journal (/run/log/journal/e549362b77944718b8913cafd8ba1a39) is 5.9M, max 47.3M, 41.4M free. Sep 5 00:36:24.049678 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:36:24.050603 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:36:24.051593 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 00:36:24.052559 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:36:24.053638 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:36:24.055400 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:36:24.055570 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:36:24.056798 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:36:24.056968 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:36:24.058122 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:36:24.058270 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:36:24.059401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:36:24.059554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:36:24.060812 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:36:24.062215 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:36:24.062361 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 00:36:24.063408 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:36:24.063616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:36:24.064859 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 5 00:36:24.066117 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:36:24.067515 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 00:36:24.077886 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:36:24.090031 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:36:24.091810 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:36:24.092857 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:36:24.094850 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:36:24.099211 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:36:24.100226 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:36:24.102112 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:36:24.103078 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:36:24.105134 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:36:24.105503 systemd-journald[1136]: Time spent on flushing to /var/log/journal/e549362b77944718b8913cafd8ba1a39 is 14.406ms for 843 entries. Sep 5 00:36:24.105503 systemd-journald[1136]: System Journal (/var/log/journal/e549362b77944718b8913cafd8ba1a39) is 8.0M, max 195.6M, 187.6M free. Sep 5 00:36:24.138094 systemd-journald[1136]: Received client request to flush runtime journal. Sep 5 00:36:24.110648 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:36:24.113164 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:36:24.114773 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:36:24.116052 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 00:36:24.121286 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 00:36:24.124354 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 00:36:24.126725 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:36:24.135309 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:36:24.143412 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:36:24.146568 udevadm[1198]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 5 00:36:24.147981 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Sep 5 00:36:24.147997 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Sep 5 00:36:24.152160 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:36:24.162193 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:36:24.180075 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
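Two journald figures above invite a quick sanity check: the runtime journal line (5.9M used, max 47.3M, 41.4M free) and the flush timing (14.406ms for 843 entries when moving to /var/log/journal).

```python
# Numbers taken verbatim from the systemd-journald[1136] entries above.
size, free, cap = 5.9, 41.4, 47.3          # MiB: runtime journal in /run
assert abs((size + free) - cap) < 0.05     # used + free accounts for the max
flush_ms, entries = 14.406, 843            # flush to persistent storage
print(f"~{flush_ms / entries * 1000:.1f} us per entry")  # ~17.1 us each
```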
Sep 5 00:36:24.191112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:36:24.202088 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Sep 5 00:36:24.202105 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Sep 5 00:36:24.205784 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:36:24.537851 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:36:24.551089 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:36:24.569105 systemd-udevd[1219]: Using default interface naming scheme 'v255'. Sep 5 00:36:24.581373 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:36:24.594052 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:36:24.597898 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:36:24.611960 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Sep 5 00:36:24.614947 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1235) Sep 5 00:36:24.649667 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:36:24.652388 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:36:24.698182 systemd-networkd[1225]: lo: Link UP Sep 5 00:36:24.698195 systemd-networkd[1225]: lo: Gained carrier Sep 5 00:36:24.698839 systemd-networkd[1225]: Enumeration completed Sep 5 00:36:24.699281 systemd-networkd[1225]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:36:24.699285 systemd-networkd[1225]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:36:24.699827 systemd-networkd[1225]: eth0: Link UP Sep 5 00:36:24.699830 systemd-networkd[1225]: eth0: Gained carrier Sep 5 00:36:24.699841 systemd-networkd[1225]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:36:24.707113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:36:24.708089 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:36:24.710553 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:36:24.712986 systemd-networkd[1225]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:36:24.717898 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 00:36:24.720281 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 00:36:24.731459 lvm[1258]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:36:24.738689 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:36:24.770128 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 00:36:24.771263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:36:24.784034 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 00:36:24.788273 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
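The lease logged above, 10.0.0.137/16 via gateway 10.0.0.1 on eth0, fixes the addressing used for the rest of the boot (the NTP server 10.0.0.1:123 contacted later is the same gateway). Derived facts via the standard library:

```python
import ipaddress

# Values from the systemd-networkd[1225] DHCPv4 entry above.
iface = ipaddress.ip_interface("10.0.0.137/16")
gateway = ipaddress.ip_address("10.0.0.1")
print(iface.network)                    # 10.0.0.0/16
print(iface.network.broadcast_address)  # 10.0.255.255
print(gateway in iface.network)         # True: the gateway is on-link
```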
Sep 5 00:36:24.826128 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 00:36:24.827240 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 00:36:24.828207 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:36:24.828237 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:36:24.828988 systemd[1]: Reached target machines.target - Containers. Sep 5 00:36:24.830625 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 00:36:24.842044 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:36:24.843920 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:36:24.844836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:36:24.847755 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:36:24.850799 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 00:36:24.856156 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:36:24.858079 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:36:24.860224 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:36:24.866926 kernel: loop0: detected capacity change from 0 to 114432 Sep 5 00:36:24.869833 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:36:24.871226 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 00:36:24.878935 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:36:24.916934 kernel: loop1: detected capacity change from 0 to 114328 Sep 5 00:36:24.956942 kernel: loop2: detected capacity change from 0 to 203944 Sep 5 00:36:25.013124 kernel: loop3: detected capacity change from 0 to 114432 Sep 5 00:36:25.021045 kernel: loop4: detected capacity change from 0 to 114328 Sep 5 00:36:25.026939 kernel: loop5: detected capacity change from 0 to 203944 Sep 5 00:36:25.033462 (sd-merge)[1288]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 00:36:25.033842 (sd-merge)[1288]: Merged extensions into '/usr'. Sep 5 00:36:25.037408 systemd[1]: Reloading requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:36:25.037435 systemd[1]: Reloading... Sep 5 00:36:25.076072 ldconfig[1269]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:36:25.079921 zram_generator::config[1316]: No configuration found. Sep 5 00:36:25.169383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:36:25.213824 systemd[1]: Reloading finished in 176 ms. Sep 5 00:36:25.232467 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:36:25.233674 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
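The (sd-merge) entries above are systemd-sysext attaching three extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') and overlaying them onto /usr; the loop0 through loop5 capacity changes are those images being set up as loop devices. One of the three is the kubernetes.raw symlink Ignition wrote into /etc/extensions earlier in this log. A small inventory sketch of that directory, which is one of several sysext search paths:

```python
from pathlib import Path

# /etc/extensions is where Ignition placed the kubernetes.raw symlink; the
# other two images presumably ship elsewhere, so this inventory is partial.
for image in sorted(Path("/etc/extensions").glob("*.raw")):
    target = image.resolve() if image.is_symlink() else image
    print(f"{image.name} -> {target}")
```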
Sep 5 00:36:25.256044 systemd[1]: Starting ensure-sysext.service... Sep 5 00:36:25.257711 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:36:25.260774 systemd[1]: Reloading requested from client PID 1357 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:36:25.260787 systemd[1]: Reloading... Sep 5 00:36:25.272603 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:36:25.272854 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:36:25.273494 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:36:25.273711 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Sep 5 00:36:25.273761 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Sep 5 00:36:25.276042 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:36:25.276056 systemd-tmpfiles[1358]: Skipping /boot Sep 5 00:36:25.282829 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:36:25.282846 systemd-tmpfiles[1358]: Skipping /boot Sep 5 00:36:25.309931 zram_generator::config[1389]: No configuration found. Sep 5 00:36:25.386439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:36:25.430635 systemd[1]: Reloading finished in 169 ms. Sep 5 00:36:25.442473 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:36:25.460468 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:36:25.462593 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:36:25.464697 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:36:25.468127 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:36:25.473116 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:36:25.480620 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:36:25.483239 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:36:25.487556 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:36:25.497809 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:36:25.499213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:36:25.500016 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:36:25.501066 augenrules[1453]: No rules Sep 5 00:36:25.501554 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:36:25.504536 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:36:25.505941 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:36:25.506081 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:36:25.508555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
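The "Duplicate line for path" warnings above mean the same path is declared by more than one tmpfiles.d fragment; the first declaration read wins and later duplicates are ignored. A toy detector; the fragment contents here are invented for the demo, and only the fragment names and paths come from the log:

```python
from collections import defaultdict

# tmpfiles.d lines are "type path mode user group age argument".
fragments = {
    "00-earlier.conf": ["d /root 0750 root root -"],  # hypothetical fragment
    "provision.conf":  ["d /root 0700 root root -"],  # duplicate -> warning
}
seen = defaultdict(list)
for name, lines in fragments.items():
    for line in lines:
        seen[line.split()[1]].append(name)
print({path: names for path, names in seen.items() if len(names) > 1})
# {'/root': ['00-earlier.conf', 'provision.conf']}
```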
Sep 5 00:36:25.508712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:36:25.511778 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:36:25.512133 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:36:25.518483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:36:25.529161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:36:25.530925 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:36:25.535206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:36:25.535544 systemd-resolved[1433]: Positive Trust Anchors: Sep 5 00:36:25.535563 systemd-resolved[1433]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:36:25.535597 systemd-resolved[1433]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:36:25.536053 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:36:25.538237 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:36:25.539082 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:36:25.540353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:36:25.540510 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:36:25.541967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:36:25.542090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:36:25.542528 systemd-resolved[1433]: Defaulting to hostname 'linux'. Sep 5 00:36:25.543551 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:36:25.546437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:36:25.547715 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:36:25.549173 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:36:25.552488 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:36:25.558123 systemd[1]: Reached target network.target - Network. Sep 5 00:36:25.558888 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:36:25.560008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:36:25.571110 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:36:25.572797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:36:25.574543 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
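The positive trust anchor systemd-resolved loads above is the DNSSEC DS record for the root zone, and the negative anchors that follow it are private-use and local names it will never try to validate. The DS fields decode as follows (RFC 4034 layout):

```python
# DS record copied verbatim from the systemd-resolved[1433] entry above.
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
print(f"owner {owner!r}: key tag {key_tag} (the 2017 root KSK), "
      f"algorithm {algorithm} (RSA/SHA-256), digest type {digest_type} (SHA-256)")
print(f"digest {digest[:16]}...")
```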
Sep 5 00:36:25.578162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:36:25.579013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:36:25.579137 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:36:25.579959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:36:25.580089 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:36:25.581393 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:36:25.581536 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:36:25.582887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:36:25.583031 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:36:25.584339 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:36:25.584534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:36:25.587120 systemd[1]: Finished ensure-sysext.service. Sep 5 00:36:25.591010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:36:25.591077 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:36:25.592590 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:36:25.634941 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:36:25.635662 systemd-timesyncd[1500]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:36:25.635712 systemd-timesyncd[1500]: Initial clock synchronization to Fri 2025-09-05 00:36:25.454581 UTC. Sep 5 00:36:25.636169 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:36:25.637012 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:36:25.637897 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:36:25.638814 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:36:25.639824 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:36:25.639857 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:36:25.640598 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:36:25.641516 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:36:25.642445 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:36:25.643392 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:36:25.644720 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:36:25.646805 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:36:25.648625 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:36:25.653753 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:36:25.654681 systemd[1]: Reached target sockets.target - Socket Units. 
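Taking the two timestamps in the systemd-timesyncd entry above at face value, the journal stamped the message at 00:36:25.635662 while the clock was being set to 00:36:25.454581 UTC, so the pre-sync clock was running roughly 181 ms ahead of the NTP server:

```python
from datetime import datetime

# Both stamps come from the single systemd-timesyncd[1500] entry above.
logged = datetime.fromisoformat("2025-09-05 00:36:25.635662")
synced = datetime.fromisoformat("2025-09-05 00:36:25.454581")
print(f"{(logged - synced).total_seconds() * 1000:.3f} ms")  # 181.081 ms
```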
Sep 5 00:36:25.655458 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:36:25.656297 systemd[1]: System is tainted: cgroupsv1 Sep 5 00:36:25.656340 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:36:25.656358 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:36:25.657347 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:36:25.659085 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:36:25.660741 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:36:25.665059 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:36:25.665829 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:36:25.666821 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:36:25.667802 jq[1506]: false Sep 5 00:36:25.669919 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:36:25.675048 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:36:25.676859 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:36:25.682559 dbus-daemon[1505]: [system] SELinux support is enabled Sep 5 00:36:25.684079 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:36:25.685545 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:36:25.686621 extend-filesystems[1507]: Found loop3 Sep 5 00:36:25.686621 extend-filesystems[1507]: Found loop4 Sep 5 00:36:25.686621 extend-filesystems[1507]: Found loop5 Sep 5 00:36:25.686621 extend-filesystems[1507]: Found vda Sep 5 00:36:25.686621 extend-filesystems[1507]: Found vda1 Sep 5 00:36:25.686621 extend-filesystems[1507]: Found vda2 Sep 5 00:36:25.686621 extend-filesystems[1507]: Found vda3 Sep 5 00:36:25.693879 extend-filesystems[1507]: Found usr Sep 5 00:36:25.693879 extend-filesystems[1507]: Found vda4 Sep 5 00:36:25.693879 extend-filesystems[1507]: Found vda6 Sep 5 00:36:25.693879 extend-filesystems[1507]: Found vda7 Sep 5 00:36:25.693879 extend-filesystems[1507]: Found vda9 Sep 5 00:36:25.693879 extend-filesystems[1507]: Checking size of /dev/vda9 Sep 5 00:36:25.695048 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:36:25.697323 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:36:25.699825 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:36:25.706962 jq[1530]: true Sep 5 00:36:25.707688 extend-filesystems[1507]: Resized partition /dev/vda9 Sep 5 00:36:25.709229 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:36:25.709464 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:36:25.709702 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:36:25.709900 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:36:25.713242 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 5 00:36:25.713440 extend-filesystems[1535]: resize2fs 1.47.1 (20-May-2024) Sep 5 00:36:25.713474 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:36:25.721706 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:36:25.733989 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:36:25.738673 jq[1537]: true Sep 5 00:36:25.741927 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1223) Sep 5 00:36:25.741969 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:36:25.747079 tar[1536]: linux-arm64/helm Sep 5 00:36:25.747122 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:36:25.747154 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:36:25.750928 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:36:25.750950 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:36:25.757684 systemd-logind[1518]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 00:36:25.758965 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:36:25.759270 extend-filesystems[1535]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:36:25.759270 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:36:25.759270 extend-filesystems[1535]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:36:25.759089 systemd-logind[1518]: New seat seat0. Sep 5 00:36:25.781209 update_engine[1527]: I20250905 00:36:25.760427 1527 main.cc:92] Flatcar Update Engine starting Sep 5 00:36:25.781209 update_engine[1527]: I20250905 00:36:25.763249 1527 update_check_scheduler.cc:74] Next update check in 3m20s Sep 5 00:36:25.782239 extend-filesystems[1507]: Resized filesystem in /dev/vda9 Sep 5 00:36:25.759177 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:36:25.776602 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:36:25.782533 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:36:25.785381 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:36:25.790163 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:36:25.795266 bash[1568]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:36:25.800234 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:36:25.802214 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
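The EXT4-fs kernel lines above bracket the online resize that extend-filesystems performed on /dev/vda9: 553472 to 1864699 blocks at 4k each, i.e. the root partition grew from about 2.1 GiB to about 7.1 GiB:

```python
# Block counts from the EXT4-fs/resize2fs entries above; 4k block size.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699

def to_gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB "
      f"(+{to_gib(new_blocks - old_blocks):.2f} GiB)")  # 2.11 -> 7.11 (+5.00)
```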
Sep 5 00:36:25.845008 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:36:25.907109 containerd[1543]: time="2025-09-05T00:36:25.906987720Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 00:36:25.929174 containerd[1543]: time="2025-09-05T00:36:25.929133360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:36:25.930439 containerd[1543]: time="2025-09-05T00:36:25.930384480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:36:25.930439 containerd[1543]: time="2025-09-05T00:36:25.930421800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 00:36:25.930439 containerd[1543]: time="2025-09-05T00:36:25.930438520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 00:36:25.930590 containerd[1543]: time="2025-09-05T00:36:25.930572400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 00:36:25.930618 containerd[1543]: time="2025-09-05T00:36:25.930593840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 00:36:25.930662 containerd[1543]: time="2025-09-05T00:36:25.930646640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:36:25.930683 containerd[1543]: time="2025-09-05T00:36:25.930661920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:36:25.930860 containerd[1543]: time="2025-09-05T00:36:25.930828600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:36:25.930860 containerd[1543]: time="2025-09-05T00:36:25.930852160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 00:36:25.930917 containerd[1543]: time="2025-09-05T00:36:25.930865160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:36:25.930917 containerd[1543]: time="2025-09-05T00:36:25.930875600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 00:36:25.930983 containerd[1543]: time="2025-09-05T00:36:25.930965840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:36:25.931171 containerd[1543]: time="2025-09-05T00:36:25.931152960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:36:25.931296 containerd[1543]: time="2025-09-05T00:36:25.931279840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:36:25.931322 containerd[1543]: time="2025-09-05T00:36:25.931298160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 00:36:25.931383 containerd[1543]: time="2025-09-05T00:36:25.931369240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 00:36:25.931439 containerd[1543]: time="2025-09-05T00:36:25.931425200Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:36:25.934507 containerd[1543]: time="2025-09-05T00:36:25.934473600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 00:36:25.934551 containerd[1543]: time="2025-09-05T00:36:25.934529240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 00:36:25.934551 containerd[1543]: time="2025-09-05T00:36:25.934545480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 00:36:25.934588 containerd[1543]: time="2025-09-05T00:36:25.934559720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 00:36:25.934588 containerd[1543]: time="2025-09-05T00:36:25.934574600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 00:36:25.934721 containerd[1543]: time="2025-09-05T00:36:25.934702120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 00:36:25.935895 containerd[1543]: time="2025-09-05T00:36:25.935123880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 00:36:25.935895 containerd[1543]: time="2025-09-05T00:36:25.935258160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 00:36:25.935895 containerd[1543]: time="2025-09-05T00:36:25.935274960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 00:36:25.935895 containerd[1543]: time="2025-09-05T00:36:25.935287720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 00:36:25.935895 containerd[1543]: time="2025-09-05T00:36:25.935301440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 00:36:25.935895 containerd[1543]: time="2025-09-05T00:36:25.935313440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 00:36:25.936167 containerd[1543]: time="2025-09-05T00:36:25.935479280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 00:36:25.936202 containerd[1543]: time="2025-09-05T00:36:25.936183360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 00:36:25.936230 containerd[1543]: time="2025-09-05T00:36:25.936216280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 5 00:36:25.936262 containerd[1543]: time="2025-09-05T00:36:25.936250680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 00:36:25.936286 containerd[1543]: time="2025-09-05T00:36:25.936272160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 00:36:25.936305 containerd[1543]: time="2025-09-05T00:36:25.936286240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 00:36:25.936323 containerd[1543]: time="2025-09-05T00:36:25.936315120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936358 containerd[1543]: time="2025-09-05T00:36:25.936336960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936387 containerd[1543]: time="2025-09-05T00:36:25.936362040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936387 containerd[1543]: time="2025-09-05T00:36:25.936381360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936442 containerd[1543]: time="2025-09-05T00:36:25.936413000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936442 containerd[1543]: time="2025-09-05T00:36:25.936432680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936478 containerd[1543]: time="2025-09-05T00:36:25.936447840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936478 containerd[1543]: time="2025-09-05T00:36:25.936461600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936518 containerd[1543]: time="2025-09-05T00:36:25.936480040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936518 containerd[1543]: time="2025-09-05T00:36:25.936500320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936554 containerd[1543]: time="2025-09-05T00:36:25.936515960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936554 containerd[1543]: time="2025-09-05T00:36:25.936531920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936588 containerd[1543]: time="2025-09-05T00:36:25.936549280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936588 containerd[1543]: time="2025-09-05T00:36:25.936578640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 00:36:25.936623 containerd[1543]: time="2025-09-05T00:36:25.936607160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.936645 containerd[1543]: time="2025-09-05T00:36:25.936623040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 5 00:36:25.936645 containerd[1543]: time="2025-09-05T00:36:25.936638200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 00:36:25.936801 containerd[1543]: time="2025-09-05T00:36:25.936767400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 00:36:25.936835 containerd[1543]: time="2025-09-05T00:36:25.936795960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 00:36:25.936835 containerd[1543]: time="2025-09-05T00:36:25.936813240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 00:36:25.936873 containerd[1543]: time="2025-09-05T00:36:25.936830560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 00:36:25.936873 containerd[1543]: time="2025-09-05T00:36:25.936845400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.938962 containerd[1543]: time="2025-09-05T00:36:25.938922880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 00:36:25.938994 containerd[1543]: time="2025-09-05T00:36:25.938968400Z" level=info msg="NRI interface is disabled by configuration." Sep 5 00:36:25.939015 containerd[1543]: time="2025-09-05T00:36:25.938985160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 5 00:36:25.939386 containerd[1543]: time="2025-09-05T00:36:25.939321040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 00:36:25.939508 containerd[1543]: time="2025-09-05T00:36:25.939392000Z" level=info msg="Connect containerd service" Sep 5 00:36:25.943165 containerd[1543]: time="2025-09-05T00:36:25.943131720Z" level=info msg="using legacy CRI server" Sep 5 00:36:25.943165 containerd[1543]: time="2025-09-05T00:36:25.943159720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:36:25.943270 containerd[1543]: time="2025-09-05T00:36:25.943248960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 00:36:25.943844 containerd[1543]: time="2025-09-05T00:36:25.943809800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:36:25.945556 containerd[1543]: time="2025-09-05T00:36:25.944265000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:36:25.945556 containerd[1543]: time="2025-09-05T00:36:25.944287560Z" level=info msg="Start subscribing containerd event" Sep 5 00:36:25.945556 containerd[1543]: time="2025-09-05T00:36:25.944335720Z" level=info msg="Start recovering state" Sep 5 00:36:25.945556 containerd[1543]: time="2025-09-05T00:36:25.944311200Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:36:25.945556 containerd[1543]: time="2025-09-05T00:36:25.944395680Z" level=info msg="Start event monitor" Sep 5 00:36:25.945556 containerd[1543]: time="2025-09-05T00:36:25.944417360Z" level=info msg="Start snapshots syncer" Sep 5 00:36:25.945556 containerd[1543]: time="2025-09-05T00:36:25.944427080Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:36:25.945556 containerd[1543]: time="2025-09-05T00:36:25.944434320Z" level=info msg="Start streaming server" Sep 5 00:36:25.944648 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:36:25.945847 containerd[1543]: time="2025-09-05T00:36:25.945827480Z" level=info msg="containerd successfully booted in 0.040866s" Sep 5 00:36:26.049441 sshd_keygen[1528]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:36:26.067097 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:36:26.083182 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:36:26.088136 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:36:26.088354 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:36:26.091105 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
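The CRI plugin's cni load failure above is benign at this stage: the config dump shows NetworkPluginConfDir:/etc/cni/net.d and NetworkPluginMaxConfNum:1, and that directory is simply empty until a network add-on installs a conflist. Purely as a sketch of what containerd is waiting for, the snippet below writes a minimal bridge conflist; the network name, bridge device, and subnet are placeholders rather than values from this host, and it assumes the standard bridge/host-local/portmap binaries exist under /opt/cni/bin (the NetworkPluginBinDir from the config dump).

    import json, os

    # Illustrative only: a minimal CNI conflist that would satisfy the
    # "no network config found in /etc/cni/net.d" error above. The name,
    # bridge device, and subnet are placeholders, not this host's values.
    conflist = {
        "cniVersion": "0.4.0",
        "name": "demo-net",
        "plugins": [
            {
                "type": "bridge",          # binary expected in /opt/cni/bin
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    os.makedirs("/etc/cni/net.d", exist_ok=True)
    with open("/etc/cni/net.d/10-demo.conflist", "w") as f:
        json.dump(conflist, f, indent=2)

With NetworkPluginMaxConfNum:1 only the lexically first file in the directory is honored, which is why conflists conventionally carry a numeric prefix.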
Sep 5 00:36:26.099008 tar[1536]: linux-arm64/LICENSE Sep 5 00:36:26.099078 tar[1536]: linux-arm64/README.md Sep 5 00:36:26.102220 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:36:26.105179 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:36:26.107349 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 00:36:26.108722 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:36:26.110573 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:36:26.264051 systemd-networkd[1225]: eth0: Gained IPv6LL Sep 5 00:36:26.266525 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:36:26.267900 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:36:26.278123 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:36:26.280129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:36:26.281971 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:36:26.295516 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:36:26.295833 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:36:26.297604 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:36:26.300946 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:36:26.808779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:36:26.810123 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:36:26.811998 systemd[1]: Startup finished in 6.355s (kernel) + 3.311s (userspace) = 9.667s. Sep 5 00:36:26.812221 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:36:27.168014 kubelet[1642]: E0905 00:36:27.167886 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:36:27.170464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:36:27.170642 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:36:30.097764 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:36:30.108113 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:39134.service - OpenSSH per-connection server daemon (10.0.0.1:39134). Sep 5 00:36:30.148743 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 39134 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:36:30.150308 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:30.157093 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:36:30.165153 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:36:30.166871 systemd-logind[1518]: New session 1 of user core. Sep 5 00:36:30.173476 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:36:30.175355 systemd[1]: Starting user@500.service - User Manager for UID 500... 
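The kubelet failure above is the expected crash loop of a kubeadm-style node before cluster bootstrap: /var/lib/kubelet/config.yaml is generated during kubeadm init, so until then the unit exits with status 1 and systemd reschedules it. Purely as a sketch of the file's shape (the real one is written by the bootstrap tooling; the values below are illustrative, apart from the cgroup driver, static pod path, and CA path that appear later in this same log):

    # Illustrative only: the file kubelet is looking for is normally
    # written by `kubeadm init`; this sketch just shows its expected shape.
    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs          # matches CgroupDriver logged below
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    """

    with open("/var/lib/kubelet/config.yaml", "w") as f:
        f.write(KUBELET_CONFIG)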
Sep 5 00:36:30.180845 (systemd)[1661]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:36:30.249579 systemd[1661]: Queued start job for default target default.target. Sep 5 00:36:30.249917 systemd[1661]: Created slice app.slice - User Application Slice. Sep 5 00:36:30.249935 systemd[1661]: Reached target paths.target - Paths. Sep 5 00:36:30.249946 systemd[1661]: Reached target timers.target - Timers. Sep 5 00:36:30.263989 systemd[1661]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:36:30.269335 systemd[1661]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:36:30.269392 systemd[1661]: Reached target sockets.target - Sockets. Sep 5 00:36:30.269404 systemd[1661]: Reached target basic.target - Basic System. Sep 5 00:36:30.269438 systemd[1661]: Reached target default.target - Main User Target. Sep 5 00:36:30.269463 systemd[1661]: Startup finished in 84ms. Sep 5 00:36:30.269740 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:36:30.270941 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:36:30.332138 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:39148.service - OpenSSH per-connection server daemon (10.0.0.1:39148). Sep 5 00:36:30.366468 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 39148 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:36:30.367530 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:30.371021 systemd-logind[1518]: New session 2 of user core. Sep 5 00:36:30.381296 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:36:30.430436 sshd[1673]: pam_unix(sshd:session): session closed for user core Sep 5 00:36:30.447183 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:39156.service - OpenSSH per-connection server daemon (10.0.0.1:39156). Sep 5 00:36:30.447558 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:39148.service: Deactivated successfully. Sep 5 00:36:30.449562 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:36:30.450354 systemd-logind[1518]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:36:30.451262 systemd-logind[1518]: Removed session 2. Sep 5 00:36:30.480419 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 39156 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:36:30.481637 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:30.485968 systemd-logind[1518]: New session 3 of user core. Sep 5 00:36:30.492135 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:36:30.539425 sshd[1678]: pam_unix(sshd:session): session closed for user core Sep 5 00:36:30.549131 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:39160.service - OpenSSH per-connection server daemon (10.0.0.1:39160). Sep 5 00:36:30.549478 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:39156.service: Deactivated successfully. Sep 5 00:36:30.551226 systemd-logind[1518]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:36:30.551810 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:36:30.552818 systemd-logind[1518]: Removed session 3. 
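Sessions 1 through 3 above all follow the same lifecycle: publickey accept, pam_unix session open, session close, then systemd-logind cleanup. When auditing this kind of churn it can help to pair the open and close lines mechanically; a small sketch, assuming the journal format shown here (syslog-style timestamps and sshd[PID] prefixes):

    import re
    from datetime import datetime

    OPEN_RE  = re.compile(r"^(\w+ +\d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened")
    CLOSE_RE = re.compile(r"^(\w+ +\d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed")

    def session_lifetimes(lines, year=2025):
        """Pair 'session opened'/'session closed' by sshd PID, yield durations."""
        opened = {}
        for line in lines:
            if m := OPEN_RE.match(line):
                opened[m[2]] = datetime.strptime(f"{year} {m[1]}", "%Y %b %d %H:%M:%S.%f")
            elif (m := CLOSE_RE.match(line)) and m[2] in opened:
                closed = datetime.strptime(f"{year} {m[1]}", "%Y %b %d %H:%M:%S.%f")
                yield m[2], closed - opened.pop(m[2])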
Sep 5 00:36:30.582478 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 39160 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:36:30.583623 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:30.587331 systemd-logind[1518]: New session 4 of user core. Sep 5 00:36:30.598125 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:36:30.648968 sshd[1686]: pam_unix(sshd:session): session closed for user core Sep 5 00:36:30.662167 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:39168.service - OpenSSH per-connection server daemon (10.0.0.1:39168). Sep 5 00:36:30.662565 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:39160.service: Deactivated successfully. Sep 5 00:36:30.664315 systemd-logind[1518]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:36:30.664884 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:36:30.666013 systemd-logind[1518]: Removed session 4. Sep 5 00:36:30.697585 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 39168 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:36:30.699158 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:30.702621 systemd-logind[1518]: New session 5 of user core. Sep 5 00:36:30.709123 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:36:30.764945 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:36:30.765552 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:36:30.783723 sudo[1701]: pam_unix(sudo:session): session closed for user root Sep 5 00:36:30.785541 sshd[1694]: pam_unix(sshd:session): session closed for user core Sep 5 00:36:30.809267 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:39178.service - OpenSSH per-connection server daemon (10.0.0.1:39178). Sep 5 00:36:30.810023 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:39168.service: Deactivated successfully. Sep 5 00:36:30.811865 systemd-logind[1518]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:36:30.812350 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:36:30.814295 systemd-logind[1518]: Removed session 5. Sep 5 00:36:30.849912 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 39178 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:36:30.851252 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:30.855272 systemd-logind[1518]: New session 6 of user core. Sep 5 00:36:30.865143 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:36:30.916701 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:36:30.917005 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:36:30.920090 sudo[1711]: pam_unix(sudo:session): session closed for user root Sep 5 00:36:30.924800 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 00:36:30.925136 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:36:30.939136 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 00:36:30.940336 auditctl[1714]: No rules Sep 5 00:36:30.941154 systemd[1]: audit-rules.service: Deactivated successfully. 
Sep 5 00:36:30.941418 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 00:36:30.943188 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:36:30.972016 augenrules[1733]: No rules Sep 5 00:36:30.973373 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:36:30.974924 sudo[1710]: pam_unix(sudo:session): session closed for user root Sep 5 00:36:30.976744 sshd[1703]: pam_unix(sshd:session): session closed for user core Sep 5 00:36:30.986174 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:39190.service - OpenSSH per-connection server daemon (10.0.0.1:39190). Sep 5 00:36:30.986535 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:39178.service: Deactivated successfully. Sep 5 00:36:30.990393 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:36:30.990948 systemd-logind[1518]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:36:30.992396 systemd-logind[1518]: Removed session 6. Sep 5 00:36:31.023887 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 39190 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:36:31.025221 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:36:31.029898 systemd-logind[1518]: New session 7 of user core. Sep 5 00:36:31.039173 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:36:31.091883 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:36:31.092190 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:36:31.352142 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:36:31.352325 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:36:31.556948 dockerd[1765]: time="2025-09-05T00:36:31.556711764Z" level=info msg="Starting up" Sep 5 00:36:31.784572 dockerd[1765]: time="2025-09-05T00:36:31.784446889Z" level=info msg="Loading containers: start." Sep 5 00:36:31.871931 kernel: Initializing XFRM netlink socket Sep 5 00:36:31.928223 systemd-networkd[1225]: docker0: Link UP Sep 5 00:36:31.945273 dockerd[1765]: time="2025-09-05T00:36:31.945116154Z" level=info msg="Loading containers: done." Sep 5 00:36:31.956054 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1322020183-merged.mount: Deactivated successfully. Sep 5 00:36:31.957830 dockerd[1765]: time="2025-09-05T00:36:31.957791058Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:36:31.958034 dockerd[1765]: time="2025-09-05T00:36:31.958013684Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 00:36:31.958189 dockerd[1765]: time="2025-09-05T00:36:31.958172104Z" level=info msg="Daemon has completed initialization" Sep 5 00:36:31.983135 dockerd[1765]: time="2025-09-05T00:36:31.983079230Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:36:31.983326 systemd[1]: Started docker.service - Docker Application Container Engine. 
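dockerd now answers on /run/docker.sock, separately from containerd's socket above. A quick liveness check is the API's _ping endpoint, reachable with nothing but the standard library; this sketch assumes the socket path from the log and that the caller is allowed to open it (root or the docker group):

    import socket

    # Minimal liveness probe against the unix socket the daemon logged above.
    # GET /_ping returns "OK" from a healthy dockerd.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk

    print(reply.decode(errors="replace").splitlines()[0])  # e.g. "HTTP/1.0 200 OK"

A healthy daemon answers with a 200 status line and a body of OK.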
Sep 5 00:36:32.591289 containerd[1543]: time="2025-09-05T00:36:32.591249678Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 00:36:33.180767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523915701.mount: Deactivated successfully. Sep 5 00:36:34.148123 containerd[1543]: time="2025-09-05T00:36:34.148070919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:34.149545 containerd[1543]: time="2025-09-05T00:36:34.148583423Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443" Sep 5 00:36:34.149924 containerd[1543]: time="2025-09-05T00:36:34.149873317Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:34.154054 containerd[1543]: time="2025-09-05T00:36:34.154011096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:34.155189 containerd[1543]: time="2025-09-05T00:36:34.155153704Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.563864058s" Sep 5 00:36:34.155222 containerd[1543]: time="2025-09-05T00:36:34.155191964Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 5 00:36:34.156680 containerd[1543]: time="2025-09-05T00:36:34.156647718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 00:36:35.405360 containerd[1543]: time="2025-09-05T00:36:35.405315185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:35.406335 containerd[1543]: time="2025-09-05T00:36:35.406061633Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311" Sep 5 00:36:35.406985 containerd[1543]: time="2025-09-05T00:36:35.406956225Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:35.409858 containerd[1543]: time="2025-09-05T00:36:35.409824739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:35.411044 containerd[1543]: time="2025-09-05T00:36:35.411009941Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.25430911s" Sep 5 
00:36:35.411044 containerd[1543]: time="2025-09-05T00:36:35.411043233Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 5 00:36:35.411931 containerd[1543]: time="2025-09-05T00:36:35.411488146Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 00:36:36.458414 containerd[1543]: time="2025-09-05T00:36:36.458369727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:36.459318 containerd[1543]: time="2025-09-05T00:36:36.458881582Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905" Sep 5 00:36:36.459979 containerd[1543]: time="2025-09-05T00:36:36.459949149Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:36.464723 containerd[1543]: time="2025-09-05T00:36:36.464696245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:36.465919 containerd[1543]: time="2025-09-05T00:36:36.465859241Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.054338153s" Sep 5 00:36:36.465919 containerd[1543]: time="2025-09-05T00:36:36.465892402Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 5 00:36:36.466301 containerd[1543]: time="2025-09-05T00:36:36.466273639Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 00:36:37.205993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:36:37.215105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:36:37.316086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:36:37.319763 (kubelet)[1992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:36:37.360863 kubelet[1992]: E0905 00:36:37.360820 1992 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:36:37.364628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:36:37.364802 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:36:37.456259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292185497.mount: Deactivated successfully. 
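Each pull in this run logs both the compressed bytes transferred ("bytes read=...") and the wall-clock duration ("in 1.054338153s"), so effective pull bandwidth can be recovered from the journal alone. A sketch that does the pairing; the regexes assume the exact containerd msg formats shown here, escaped quotes included, and are deliberately loose about everything else:

    import re

    BYTES_RE = re.compile(r'stop pulling image (\S+): active requests=0, bytes read=(\d+)')
    DONE_RE  = re.compile(r'Pulled image \\"(\S+?)\\".*? in ([0-9.]+)(ms|s)')

    def pull_rates(journal_text):
        """Recover effective pull bandwidth (MiB/s) from containerd pull logs."""
        read = {img: int(n) for img, n in BYTES_RE.findall(journal_text)}
        for img, value, unit in DONE_RE.findall(journal_text):
            secs = float(value) / (1000.0 if unit == "ms" else 1.0)
            if img in read and secs > 0:
                yield img, read[img] / secs / (1 << 20)

For the scheduler image above, 17125905 bytes over 1.054 s works out to roughly 15.5 MiB/s.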
Sep 5 00:36:37.858996 containerd[1543]: time="2025-09-05T00:36:37.858952952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:37.860405 containerd[1543]: time="2025-09-05T00:36:37.860337901Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097" Sep 5 00:36:37.861154 containerd[1543]: time="2025-09-05T00:36:37.861127886Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:37.863131 containerd[1543]: time="2025-09-05T00:36:37.863083248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:37.863955 containerd[1543]: time="2025-09-05T00:36:37.863877649Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.397572354s" Sep 5 00:36:37.863955 containerd[1543]: time="2025-09-05T00:36:37.863942390Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 5 00:36:37.864498 containerd[1543]: time="2025-09-05T00:36:37.864338119Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 00:36:38.361050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275918685.mount: Deactivated successfully. 
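The kube-* and coredns images land in containerd's CRI image store, not in the Docker daemon started earlier, so `docker images` will not show them. One way to inspect that store is crictl pointed at the socket containerd logged; a sketch, assuming crictl is installed on the host:

    import subprocess

    # List CRI-managed images (the kube-* and coredns pulls above), using the
    # same socket containerd logged: /run/containerd/containerd.sock.
    result = subprocess.run(
        ["crictl",
         "--runtime-endpoint", "unix:///run/containerd/containerd.sock",
         "--image-endpoint", "unix:///run/containerd/containerd.sock",
         "images"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)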
Sep 5 00:36:38.980072 containerd[1543]: time="2025-09-05T00:36:38.980014737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:38.980522 containerd[1543]: time="2025-09-05T00:36:38.980491475Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 5 00:36:38.981438 containerd[1543]: time="2025-09-05T00:36:38.981408716Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:38.984569 containerd[1543]: time="2025-09-05T00:36:38.984534884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:38.986813 containerd[1543]: time="2025-09-05T00:36:38.986780484Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.122412754s" Sep 5 00:36:38.986846 containerd[1543]: time="2025-09-05T00:36:38.986813493Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 5 00:36:38.987412 containerd[1543]: time="2025-09-05T00:36:38.987241493Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:36:39.426083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2583728112.mount: Deactivated successfully. 
Sep 5 00:36:39.430199 containerd[1543]: time="2025-09-05T00:36:39.429418914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:39.430199 containerd[1543]: time="2025-09-05T00:36:39.429835447Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 5 00:36:39.430682 containerd[1543]: time="2025-09-05T00:36:39.430638830Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:39.432932 containerd[1543]: time="2025-09-05T00:36:39.432579142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:39.433443 containerd[1543]: time="2025-09-05T00:36:39.433415395Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 446.143316ms" Sep 5 00:36:39.433483 containerd[1543]: time="2025-09-05T00:36:39.433447586Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 00:36:39.434281 containerd[1543]: time="2025-09-05T00:36:39.434258181Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 5 00:36:39.915622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221332410.mount: Deactivated successfully. Sep 5 00:36:41.790735 containerd[1543]: time="2025-09-05T00:36:41.790685746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:41.791960 containerd[1543]: time="2025-09-05T00:36:41.791932847Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 5 00:36:41.793249 containerd[1543]: time="2025-09-05T00:36:41.793210734Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:41.796010 containerd[1543]: time="2025-09-05T00:36:41.795976505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:36:41.797229 containerd[1543]: time="2025-09-05T00:36:41.797189191Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.362900126s" Sep 5 00:36:41.797275 containerd[1543]: time="2025-09-05T00:36:41.797231222Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 5 00:36:47.393542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
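Between image pulls the kubelet keeps dying on the missing config file, and systemd keeps rescheduling it; the restart counter is now at 2. The failure timestamps make the cadence visible. A small check using values copied from this log:

    from datetime import datetime

    # Failure timestamps copied from the kubelet run.go errors above.
    fails = ["00:36:27.167886", "00:36:37.360820", "00:36:47.606544"]
    ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in fails]

    for a, b in zip(ts, ts[1:]):
        print((b - a).total_seconds())   # ~10.19 s and ~10.25 s between failures

The roughly 10.2 s spacing is consistent with a unit restart delay of about ten seconds plus process startup time.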
Sep 5 00:36:47.408177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:36:47.568143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:36:47.570295 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:36:47.606613 kubelet[2151]: E0905 00:36:47.606544 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:36:47.609010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:36:47.609197 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:36:47.702833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:36:47.712243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:36:47.730772 systemd[1]: Reloading requested from client PID 2169 ('systemctl') (unit session-7.scope)... Sep 5 00:36:47.730794 systemd[1]: Reloading... Sep 5 00:36:47.800954 zram_generator::config[2209]: No configuration found. Sep 5 00:36:47.979078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:36:48.029941 systemd[1]: Reloading finished in 298 ms. Sep 5 00:36:48.070894 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:36:48.070980 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:36:48.071215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:36:48.080159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:36:48.171956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:36:48.175769 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:36:48.206624 kubelet[2265]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:36:48.206624 kubelet[2265]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 00:36:48.206624 kubelet[2265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
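Three of the startup warnings above point at the same migration: --container-runtime-endpoint and --volume-plugin-dir have direct KubeletConfiguration equivalents, while --pod-infra-container-image is being retired in favor of the CRI reporting its own sandbox image. A hedged sketch of the corresponding config fragment, reusing the runtime endpoint and the flexvolume path that appear elsewhere in this log; treat it as illustrative, not this node's actual config:

    # Illustrative mapping of the deprecated kubelet flags to
    # KubeletConfiguration (v1beta1) fields.
    CONFIG_FRAGMENT = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock      # was --container-runtime-endpoint
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ # was --volume-plugin-dir
    # --pod-infra-container-image has no config equivalent; the image
    # garbage collector reads the sandbox image from the CRI instead.
    """
    print(CONFIG_FRAGMENT)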
Sep 5 00:36:48.206951 kubelet[2265]: I0905 00:36:48.206685 2265 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:36:49.152267 kubelet[2265]: I0905 00:36:49.152226 2265 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 00:36:49.152267 kubelet[2265]: I0905 00:36:49.152257 2265 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:36:49.152480 kubelet[2265]: I0905 00:36:49.152449 2265 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 00:36:49.170844 kubelet[2265]: E0905 00:36:49.170809 2265 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:36:49.171243 kubelet[2265]: I0905 00:36:49.171224 2265 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:36:49.177399 kubelet[2265]: E0905 00:36:49.177369 2265 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:36:49.177399 kubelet[2265]: I0905 00:36:49.177399 2265 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:36:49.180675 kubelet[2265]: I0905 00:36:49.180651 2265 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:36:49.181162 kubelet[2265]: I0905 00:36:49.181143 2265 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 00:36:49.181267 kubelet[2265]: I0905 00:36:49.181242 2265 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:36:49.181404 kubelet[2265]: I0905 00:36:49.181268 2265 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 5 00:36:49.181474 kubelet[2265]: I0905 00:36:49.181470 2265 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:36:49.181500 kubelet[2265]: I0905 00:36:49.181478 2265 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 00:36:49.181632 kubelet[2265]: I0905 00:36:49.181620 2265 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:36:49.183555 kubelet[2265]: I0905 00:36:49.183536 2265 kubelet.go:408] "Attempting to sync node with API server" Sep 5 00:36:49.183594 kubelet[2265]: I0905 00:36:49.183564 2265 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:36:49.183594 kubelet[2265]: I0905 00:36:49.183583 2265 kubelet.go:314] "Adding apiserver pod source" Sep 5 00:36:49.183667 kubelet[2265]: I0905 00:36:49.183656 2265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:36:49.187954 kubelet[2265]: I0905 00:36:49.187927 2265 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:36:49.188401 kubelet[2265]: W0905 00:36:49.188168 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 5 00:36:49.188401 kubelet[2265]: E0905 00:36:49.188237 2265 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:36:49.188401 kubelet[2265]: W0905 00:36:49.188301 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 5 00:36:49.188401 kubelet[2265]: E0905 00:36:49.188329 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:36:49.188749 kubelet[2265]: I0905 00:36:49.188726 2265 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:36:49.189127 kubelet[2265]: W0905 00:36:49.188894 2265 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:36:49.190001 kubelet[2265]: I0905 00:36:49.189980 2265 server.go:1274] "Started kubelet" Sep 5 00:36:49.190835 kubelet[2265]: I0905 00:36:49.190800 2265 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:36:49.191185 kubelet[2265]: I0905 00:36:49.191143 2265 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:36:49.192137 kubelet[2265]: I0905 00:36:49.191694 2265 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:36:49.194926 kubelet[2265]: I0905 00:36:49.193619 2265 server.go:449] "Adding debug handlers to kubelet server" Sep 5 00:36:49.194926 kubelet[2265]: I0905 00:36:49.193629 2265 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:36:49.194926 kubelet[2265]: I0905 00:36:49.194219 2265 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:36:49.194926 kubelet[2265]: I0905 00:36:49.194488 2265 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 00:36:49.194926 kubelet[2265]: I0905 00:36:49.194568 2265 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 00:36:49.194926 kubelet[2265]: I0905 00:36:49.194637 2265 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:36:49.195740 kubelet[2265]: E0905 00:36:49.194393 2265 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623bd9518b9eb0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:36:49.189961392 +0000 UTC m=+1.011530009,LastTimestamp:2025-09-05 00:36:49.189961392 +0000 UTC m=+1.011530009,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:36:49.195740 kubelet[2265]: E0905 00:36:49.195515 2265 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:36:49.195999 kubelet[2265]: W0905 00:36:49.195953 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 5 00:36:49.196142 kubelet[2265]: E0905 00:36:49.196115 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:36:49.196142 kubelet[2265]: E0905 00:36:49.196009 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Sep 5 00:36:49.196485 kubelet[2265]: I0905 00:36:49.196464 2265 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:36:49.196754 kubelet[2265]: I0905 00:36:49.196635 2265 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:36:49.197599 kubelet[2265]: E0905 00:36:49.197580 2265 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:36:49.198025 kubelet[2265]: I0905 00:36:49.198009 2265 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:36:49.208063 kubelet[2265]: I0905 00:36:49.207885 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:36:49.208758 kubelet[2265]: I0905 00:36:49.208736 2265 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:36:49.208758 kubelet[2265]: I0905 00:36:49.208761 2265 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:36:49.208834 kubelet[2265]: I0905 00:36:49.208775 2265 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 00:36:49.208834 kubelet[2265]: E0905 00:36:49.208815 2265 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:36:49.215556 kubelet[2265]: W0905 00:36:49.215507 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 5 00:36:49.216117 kubelet[2265]: E0905 00:36:49.215787 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:36:49.216616 kubelet[2265]: I0905 00:36:49.216595 2265 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:36:49.216616 kubelet[2265]: I0905 00:36:49.216608 2265 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:36:49.216696 kubelet[2265]: I0905 00:36:49.216625 2265 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:36:49.290322 kubelet[2265]: I0905 00:36:49.290286 2265 policy_none.go:49] "None policy: Start" Sep 5 00:36:49.291003 kubelet[2265]: I0905 00:36:49.290961 2265 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:36:49.291003 kubelet[2265]: I0905 00:36:49.290991 2265 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:36:49.297920 kubelet[2265]: E0905 00:36:49.295810 2265 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:36:49.297920 kubelet[2265]: I0905 00:36:49.295927 2265 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:36:49.297920 kubelet[2265]: I0905 00:36:49.296110 2265 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:36:49.297920 kubelet[2265]: I0905 00:36:49.296121 2265 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:36:49.297920 kubelet[2265]: I0905 00:36:49.296624 2265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:36:49.297920 kubelet[2265]: E0905 00:36:49.297247 2265 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:36:49.396617 kubelet[2265]: E0905 00:36:49.396563 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Sep 5 00:36:49.396982 kubelet[2265]: I0905 00:36:49.396959 2265 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:36:49.397388 kubelet[2265]: E0905 00:36:49.397362 2265 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: 
connect: connection refused" node="localhost" Sep 5 00:36:49.495642 kubelet[2265]: I0905 00:36:49.495579 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:36:49.495642 kubelet[2265]: I0905 00:36:49.495608 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:36:49.495642 kubelet[2265]: I0905 00:36:49.495626 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f10833a023f009d9e67a56732d96d18-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f10833a023f009d9e67a56732d96d18\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:36:49.495642 kubelet[2265]: I0905 00:36:49.495642 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f10833a023f009d9e67a56732d96d18-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f10833a023f009d9e67a56732d96d18\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:36:49.495743 kubelet[2265]: I0905 00:36:49.495657 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:36:49.495743 kubelet[2265]: I0905 00:36:49.495672 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:36:49.495743 kubelet[2265]: I0905 00:36:49.495688 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:36:49.495743 kubelet[2265]: I0905 00:36:49.495705 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:36:49.495837 kubelet[2265]: I0905 00:36:49.495737 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f10833a023f009d9e67a56732d96d18-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"2f10833a023f009d9e67a56732d96d18\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:36:49.598730 kubelet[2265]: I0905 00:36:49.598644 2265 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:36:49.598962 kubelet[2265]: E0905 00:36:49.598938 2265 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Sep 5 00:36:49.614189 kubelet[2265]: E0905 00:36:49.614168 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:49.614538 kubelet[2265]: E0905 00:36:49.614472 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:49.614864 containerd[1543]: time="2025-09-05T00:36:49.614597156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f10833a023f009d9e67a56732d96d18,Namespace:kube-system,Attempt:0,}" Sep 5 00:36:49.614864 containerd[1543]: time="2025-09-05T00:36:49.614733653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 5 00:36:49.616395 kubelet[2265]: E0905 00:36:49.616368 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:49.616700 containerd[1543]: time="2025-09-05T00:36:49.616644286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 5 00:36:49.797844 kubelet[2265]: E0905 00:36:49.797760 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Sep 5 00:36:50.000749 kubelet[2265]: I0905 00:36:50.000534 2265 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:36:50.000843 kubelet[2265]: E0905 00:36:50.000809 2265 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Sep 5 00:36:50.116896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3359942010.mount: Deactivated successfully. 
Sep 5 00:36:50.123272 containerd[1543]: time="2025-09-05T00:36:50.123234381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:36:50.124092 containerd[1543]: time="2025-09-05T00:36:50.124061821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:36:50.124922 containerd[1543]: time="2025-09-05T00:36:50.124836110Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:36:50.126085 containerd[1543]: time="2025-09-05T00:36:50.125993686Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:36:50.126353 containerd[1543]: time="2025-09-05T00:36:50.126325941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:36:50.129448 containerd[1543]: time="2025-09-05T00:36:50.129407830Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:36:50.130149 containerd[1543]: time="2025-09-05T00:36:50.130018788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 5 00:36:50.130892 containerd[1543]: time="2025-09-05T00:36:50.130868208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:36:50.133063 containerd[1543]: time="2025-09-05T00:36:50.133027984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 518.234872ms" Sep 5 00:36:50.134861 containerd[1543]: time="2025-09-05T00:36:50.134832366Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 520.150858ms" Sep 5 00:36:50.136251 containerd[1543]: time="2025-09-05T00:36:50.136214137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 519.505397ms" Sep 5 00:36:50.227171 containerd[1543]: time="2025-09-05T00:36:50.226894986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:36:50.227171 containerd[1543]: time="2025-09-05T00:36:50.227020430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:36:50.227171 containerd[1543]: time="2025-09-05T00:36:50.227052681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:36:50.228264 containerd[1543]: time="2025-09-05T00:36:50.228140042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:36:50.228264 containerd[1543]: time="2025-09-05T00:36:50.228179605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:36:50.228264 containerd[1543]: time="2025-09-05T00:36:50.228189796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:36:50.228411 containerd[1543]: time="2025-09-05T00:36:50.228262649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:36:50.228411 containerd[1543]: time="2025-09-05T00:36:50.228362837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:36:50.230616 containerd[1543]: time="2025-09-05T00:36:50.230492241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:36:50.230616 containerd[1543]: time="2025-09-05T00:36:50.230545312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:36:50.230616 containerd[1543]: time="2025-09-05T00:36:50.230557860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:36:50.231631 containerd[1543]: time="2025-09-05T00:36:50.231572568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:36:50.277015 containerd[1543]: time="2025-09-05T00:36:50.276954834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"806becb408bc960d4bc18eed19758e6cd86548972a042321cd6a07a8cc997518\"" Sep 5 00:36:50.277940 kubelet[2265]: E0905 00:36:50.277876 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:50.279538 containerd[1543]: time="2025-09-05T00:36:50.279509567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f10833a023f009d9e67a56732d96d18,Namespace:kube-system,Attempt:0,} returns sandbox id \"be443986fd6af6524d732fbd8011e29a373d7c1be67f266df0045caa086d2661\"" Sep 5 00:36:50.280030 kubelet[2265]: E0905 00:36:50.279895 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:50.280309 containerd[1543]: time="2025-09-05T00:36:50.280281698Z" level=info msg="CreateContainer within sandbox \"806becb408bc960d4bc18eed19758e6cd86548972a042321cd6a07a8cc997518\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:36:50.281137 containerd[1543]: time="2025-09-05T00:36:50.281107179Z" level=info msg="CreateContainer within sandbox \"be443986fd6af6524d732fbd8011e29a373d7c1be67f266df0045caa086d2661\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:36:50.282105 containerd[1543]: time="2025-09-05T00:36:50.282079326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"f20b5317cd75c2e1a509158eff0252a8f339c799acc4ac9098cc9e47d1b6ae74\"" Sep 5 00:36:50.283297 kubelet[2265]: E0905 00:36:50.283269 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:50.284407 containerd[1543]: time="2025-09-05T00:36:50.284375736Z" level=info msg="CreateContainer within sandbox \"f20b5317cd75c2e1a509158eff0252a8f339c799acc4ac9098cc9e47d1b6ae74\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:36:50.295736 containerd[1543]: time="2025-09-05T00:36:50.295698694Z" level=info msg="CreateContainer within sandbox \"be443986fd6af6524d732fbd8011e29a373d7c1be67f266df0045caa086d2661\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"09e887efbda414e3e8e4efe34670af44561ca2941a47ed8eecbc0030e6945173\"" Sep 5 00:36:50.296235 containerd[1543]: time="2025-09-05T00:36:50.296204309Z" level=info msg="StartContainer for \"09e887efbda414e3e8e4efe34670af44561ca2941a47ed8eecbc0030e6945173\"" Sep 5 00:36:50.298175 containerd[1543]: time="2025-09-05T00:36:50.298144127Z" level=info msg="CreateContainer within sandbox \"806becb408bc960d4bc18eed19758e6cd86548972a042321cd6a07a8cc997518\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0191e803e87454039b1ce608dcc29e1beb169677420fad10820b05747075f703\"" Sep 5 00:36:50.298497 containerd[1543]: time="2025-09-05T00:36:50.298469988Z" level=info msg="StartContainer for 
\"0191e803e87454039b1ce608dcc29e1beb169677420fad10820b05747075f703\"" Sep 5 00:36:50.302176 containerd[1543]: time="2025-09-05T00:36:50.302122872Z" level=info msg="CreateContainer within sandbox \"f20b5317cd75c2e1a509158eff0252a8f339c799acc4ac9098cc9e47d1b6ae74\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0f73a354f2b19e759972514d6dd493c9aebc904c33a2837a115f91b799b086b6\"" Sep 5 00:36:50.302879 containerd[1543]: time="2025-09-05T00:36:50.302833619Z" level=info msg="StartContainer for \"0f73a354f2b19e759972514d6dd493c9aebc904c33a2837a115f91b799b086b6\"" Sep 5 00:36:50.355695 containerd[1543]: time="2025-09-05T00:36:50.355637466Z" level=info msg="StartContainer for \"0f73a354f2b19e759972514d6dd493c9aebc904c33a2837a115f91b799b086b6\" returns successfully" Sep 5 00:36:50.367796 containerd[1543]: time="2025-09-05T00:36:50.367701303Z" level=info msg="StartContainer for \"0191e803e87454039b1ce608dcc29e1beb169677420fad10820b05747075f703\" returns successfully" Sep 5 00:36:50.367796 containerd[1543]: time="2025-09-05T00:36:50.367702821Z" level=info msg="StartContainer for \"09e887efbda414e3e8e4efe34670af44561ca2941a47ed8eecbc0030e6945173\" returns successfully" Sep 5 00:36:50.389584 kubelet[2265]: W0905 00:36:50.389491 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 5 00:36:50.389584 kubelet[2265]: E0905 00:36:50.389556 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:36:50.402206 kubelet[2265]: W0905 00:36:50.402107 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 5 00:36:50.402206 kubelet[2265]: E0905 00:36:50.402172 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:36:50.802711 kubelet[2265]: I0905 00:36:50.802637 2265 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:36:51.231945 kubelet[2265]: E0905 00:36:51.231848 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:51.238286 kubelet[2265]: E0905 00:36:51.238093 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:51.239871 kubelet[2265]: E0905 00:36:51.239833 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:51.811125 kubelet[2265]: E0905 00:36:51.811077 2265 nodelease.go:49] "Failed to get node when trying to set 
owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:36:51.896270 kubelet[2265]: I0905 00:36:51.896233 2265 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 00:36:52.186948 kubelet[2265]: I0905 00:36:52.186914 2265 apiserver.go:52] "Watching apiserver" Sep 5 00:36:52.195343 kubelet[2265]: I0905 00:36:52.195320 2265 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 00:36:52.245001 kubelet[2265]: E0905 00:36:52.244965 2265 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:36:52.245157 kubelet[2265]: E0905 00:36:52.245141 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:53.247793 kubelet[2265]: E0905 00:36:53.247653 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:53.611223 systemd[1]: Reloading requested from client PID 2543 ('systemctl') (unit session-7.scope)... Sep 5 00:36:53.611237 systemd[1]: Reloading... Sep 5 00:36:53.667938 zram_generator::config[2582]: No configuration found. Sep 5 00:36:53.755012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:36:53.813655 systemd[1]: Reloading finished in 202 ms. Sep 5 00:36:53.842725 kubelet[2265]: I0905 00:36:53.842623 2265 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:36:53.842701 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:36:53.855249 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:36:53.855548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:36:53.867324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:36:53.960619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:36:53.965244 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:36:54.003391 kubelet[2634]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:36:54.003391 kubelet[2634]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 00:36:54.003391 kubelet[2634]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:36:54.003942 kubelet[2634]: I0905 00:36:54.003458 2634 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:36:54.011088 kubelet[2634]: I0905 00:36:54.011053 2634 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 00:36:54.011088 kubelet[2634]: I0905 00:36:54.011082 2634 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:36:54.011342 kubelet[2634]: I0905 00:36:54.011333 2634 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 00:36:54.012823 kubelet[2634]: I0905 00:36:54.012801 2634 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 00:36:54.015039 kubelet[2634]: I0905 00:36:54.015003 2634 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:36:54.023235 kubelet[2634]: E0905 00:36:54.023195 2634 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:36:54.023235 kubelet[2634]: I0905 00:36:54.023235 2634 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:36:54.025756 kubelet[2634]: I0905 00:36:54.025709 2634 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:36:54.026075 kubelet[2634]: I0905 00:36:54.026061 2634 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 00:36:54.026185 kubelet[2634]: I0905 00:36:54.026147 2634 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:36:54.026337 kubelet[2634]: I0905 00:36:54.026185 2634 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 5 00:36:54.026419 kubelet[2634]: I0905 00:36:54.026345 2634 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:36:54.026419 kubelet[2634]: I0905 00:36:54.026354 2634 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 00:36:54.026419 kubelet[2634]: I0905 00:36:54.026386 2634 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:36:54.026503 kubelet[2634]: I0905 00:36:54.026490 2634 kubelet.go:408] "Attempting to sync node with API server" Sep 5 00:36:54.026531 kubelet[2634]: I0905 00:36:54.026506 2634 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:36:54.027945 kubelet[2634]: I0905 00:36:54.026523 2634 kubelet.go:314] "Adding apiserver pod source" Sep 5 00:36:54.027945 kubelet[2634]: I0905 00:36:54.027253 2634 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:36:54.028123 kubelet[2634]: I0905 00:36:54.028003 2634 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:36:54.028483 kubelet[2634]: I0905 00:36:54.028464 2634 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:36:54.029367 kubelet[2634]: I0905 00:36:54.028853 2634 server.go:1274] "Started kubelet" Sep 5 00:36:54.029595 kubelet[2634]: I0905 00:36:54.029551 2634 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:36:54.029862 kubelet[2634]: I0905 00:36:54.029833 2634 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:36:54.030019 kubelet[2634]: I0905 00:36:54.029985 2634 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:36:54.030186 kubelet[2634]: I0905 00:36:54.030144 2634 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:36:54.031568 kubelet[2634]: I0905 00:36:54.031548 2634 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:36:54.032482 kubelet[2634]: I0905 00:36:54.032421 2634 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 00:36:54.032571 kubelet[2634]: I0905 00:36:54.032523 2634 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 00:36:54.032660 kubelet[2634]: I0905 00:36:54.032623 2634 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:36:54.034310 kubelet[2634]: I0905 00:36:54.034203 2634 server.go:449] "Adding debug handlers to kubelet server" Sep 5 00:36:54.036430 kubelet[2634]: I0905 00:36:54.036409 2634 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:36:54.036596 kubelet[2634]: I0905 00:36:54.036577 2634 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:36:54.040212 kubelet[2634]: E0905 00:36:54.036317 2634 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:36:54.046818 kubelet[2634]: I0905 00:36:54.045074 2634 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:36:54.067653 kubelet[2634]: I0905 00:36:54.067617 2634 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:36:54.070885 kubelet[2634]: I0905 00:36:54.070855 2634 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 00:36:54.070885 kubelet[2634]: I0905 00:36:54.070883 2634 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:36:54.071723 kubelet[2634]: I0905 00:36:54.071364 2634 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 00:36:54.071723 kubelet[2634]: E0905 00:36:54.071415 2634 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:36:54.105225 kubelet[2634]: I0905 00:36:54.105182 2634 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:36:54.105225 kubelet[2634]: I0905 00:36:54.105202 2634 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:36:54.105225 kubelet[2634]: I0905 00:36:54.105221 2634 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:36:54.105840 kubelet[2634]: I0905 00:36:54.105785 2634 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:36:54.105840 kubelet[2634]: I0905 00:36:54.105806 2634 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:36:54.105840 kubelet[2634]: I0905 00:36:54.105828 2634 policy_none.go:49] "None policy: Start" Sep 5 00:36:54.106976 kubelet[2634]: I0905 00:36:54.106959 2634 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:36:54.107052 kubelet[2634]: I0905 00:36:54.107044 2634 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:36:54.107438 kubelet[2634]: I0905 00:36:54.107411 2634 state_mem.go:75] "Updated machine memory state" Sep 5 00:36:54.108675 kubelet[2634]: I0905 00:36:54.108655 2634 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:36:54.109339 kubelet[2634]: I0905 00:36:54.108805 2634 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:36:54.109339 kubelet[2634]: I0905 00:36:54.108821 2634 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:36:54.109339 kubelet[2634]: I0905 00:36:54.109200 2634 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:36:54.178180 kubelet[2634]: E0905 00:36:54.178133 2634 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 00:36:54.214041 kubelet[2634]: I0905 00:36:54.213850 2634 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:36:54.220450 kubelet[2634]: I0905 00:36:54.220411 2634 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 5 00:36:54.220545 kubelet[2634]: I0905 00:36:54.220493 2634 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 00:36:54.334134 kubelet[2634]: I0905 00:36:54.334005 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f10833a023f009d9e67a56732d96d18-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f10833a023f009d9e67a56732d96d18\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:36:54.334134 kubelet[2634]: I0905 00:36:54.334044 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:36:54.334134 kubelet[2634]: I0905 00:36:54.334063 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:36:54.334134 kubelet[2634]: I0905 00:36:54.334079 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:36:54.334134 kubelet[2634]: I0905 00:36:54.334098 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:36:54.334386 kubelet[2634]: I0905 00:36:54.334139 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:36:54.334386 kubelet[2634]: I0905 00:36:54.334159 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f10833a023f009d9e67a56732d96d18-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"2f10833a023f009d9e67a56732d96d18\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:36:54.334386 kubelet[2634]: I0905 00:36:54.334175 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f10833a023f009d9e67a56732d96d18-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2f10833a023f009d9e67a56732d96d18\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:36:54.334386 kubelet[2634]: I0905 00:36:54.334250 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:36:54.477139 kubelet[2634]: E0905 00:36:54.477029 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:54.478107 kubelet[2634]: E0905 00:36:54.478067 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:54.479291 kubelet[2634]: E0905 00:36:54.479202 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:55.027995 kubelet[2634]: I0905 00:36:55.027947 2634 apiserver.go:52] "Watching apiserver" Sep 5 00:36:55.033104 kubelet[2634]: I0905 00:36:55.033049 2634 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 00:36:55.084502 kubelet[2634]: E0905 00:36:55.084459 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:55.085217 kubelet[2634]: E0905 00:36:55.085185 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:55.091155 kubelet[2634]: E0905 00:36:55.091061 2634 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 00:36:55.091810 kubelet[2634]: E0905 00:36:55.091773 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:55.116961 kubelet[2634]: I0905 00:36:55.116890 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.11687615 podStartE2EDuration="2.11687615s" podCreationTimestamp="2025-09-05 00:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:36:55.116783393 +0000 UTC m=+1.148457121" watchObservedRunningTime="2025-09-05 00:36:55.11687615 +0000 UTC m=+1.148549878" Sep 5 00:36:55.117091 kubelet[2634]: I0905 00:36:55.117035 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.117030357 podStartE2EDuration="1.117030357s" podCreationTimestamp="2025-09-05 00:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:36:55.109079862 +0000 UTC m=+1.140753590" watchObservedRunningTime="2025-09-05 00:36:55.117030357 +0000 UTC m=+1.148704085" Sep 5 00:36:56.086402 kubelet[2634]: E0905 00:36:56.086036 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:58.611729 kubelet[2634]: I0905 00:36:58.611697 2634 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:36:58.613045 containerd[1543]: time="2025-09-05T00:36:58.613003552Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:36:58.613955 kubelet[2634]: I0905 00:36:58.613210 2634 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:36:59.580594 kubelet[2634]: I0905 00:36:59.580531 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.580514869 podStartE2EDuration="5.580514869s" podCreationTimestamp="2025-09-05 00:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:36:55.124127134 +0000 UTC m=+1.155800902" watchObservedRunningTime="2025-09-05 00:36:59.580514869 +0000 UTC m=+5.612188597" Sep 5 00:36:59.670104 kubelet[2634]: I0905 00:36:59.670050 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5bb0c29d-ed25-4af9-8e5e-908501837050-kube-proxy\") pod \"kube-proxy-xlcxr\" (UID: \"5bb0c29d-ed25-4af9-8e5e-908501837050\") " pod="kube-system/kube-proxy-xlcxr" Sep 5 00:36:59.670735 kubelet[2634]: I0905 00:36:59.670185 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bb0c29d-ed25-4af9-8e5e-908501837050-xtables-lock\") pod \"kube-proxy-xlcxr\" (UID: \"5bb0c29d-ed25-4af9-8e5e-908501837050\") " pod="kube-system/kube-proxy-xlcxr" Sep 5 00:36:59.670735 kubelet[2634]: I0905 00:36:59.670208 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bb0c29d-ed25-4af9-8e5e-908501837050-lib-modules\") pod \"kube-proxy-xlcxr\" (UID: \"5bb0c29d-ed25-4af9-8e5e-908501837050\") " pod="kube-system/kube-proxy-xlcxr" Sep 5 00:36:59.670735 kubelet[2634]: I0905 00:36:59.670357 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv9lf\" (UniqueName: \"kubernetes.io/projected/5bb0c29d-ed25-4af9-8e5e-908501837050-kube-api-access-wv9lf\") pod \"kube-proxy-xlcxr\" (UID: \"5bb0c29d-ed25-4af9-8e5e-908501837050\") " pod="kube-system/kube-proxy-xlcxr" Sep 5 00:36:59.771091 kubelet[2634]: I0905 00:36:59.771034 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hkzr\" (UniqueName: \"kubernetes.io/projected/523e999f-2f82-47af-a41c-72f22d1fc10d-kube-api-access-6hkzr\") pod \"tigera-operator-58fc44c59b-gtzq2\" (UID: 
\"523e999f-2f82-47af-a41c-72f22d1fc10d\") " pod="tigera-operator/tigera-operator-58fc44c59b-gtzq2" Sep 5 00:36:59.771538 kubelet[2634]: I0905 00:36:59.771300 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/523e999f-2f82-47af-a41c-72f22d1fc10d-var-lib-calico\") pod \"tigera-operator-58fc44c59b-gtzq2\" (UID: \"523e999f-2f82-47af-a41c-72f22d1fc10d\") " pod="tigera-operator/tigera-operator-58fc44c59b-gtzq2" Sep 5 00:36:59.886321 kubelet[2634]: E0905 00:36:59.886233 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:59.887438 containerd[1543]: time="2025-09-05T00:36:59.887060069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xlcxr,Uid:5bb0c29d-ed25-4af9-8e5e-908501837050,Namespace:kube-system,Attempt:0,}" Sep 5 00:36:59.907569 containerd[1543]: time="2025-09-05T00:36:59.907470837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:36:59.907569 containerd[1543]: time="2025-09-05T00:36:59.907534316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:36:59.907569 containerd[1543]: time="2025-09-05T00:36:59.907545756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:36:59.907760 containerd[1543]: time="2025-09-05T00:36:59.907632835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:36:59.946379 containerd[1543]: time="2025-09-05T00:36:59.946339796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xlcxr,Uid:5bb0c29d-ed25-4af9-8e5e-908501837050,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae644efa1c4c5dc0c76736a84b4a713157256fe588b15379d7df40e40f72f896\"" Sep 5 00:36:59.946951 kubelet[2634]: E0905 00:36:59.946929 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:36:59.948495 containerd[1543]: time="2025-09-05T00:36:59.948459172Z" level=info msg="CreateContainer within sandbox \"ae644efa1c4c5dc0c76736a84b4a713157256fe588b15379d7df40e40f72f896\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:36:59.960100 containerd[1543]: time="2025-09-05T00:36:59.959987601Z" level=info msg="CreateContainer within sandbox \"ae644efa1c4c5dc0c76736a84b4a713157256fe588b15379d7df40e40f72f896\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"28187b59931a79c6a00417b4456b38c5ac838816f2ca1e420a1c72a5d4a88154\"" Sep 5 00:36:59.960711 containerd[1543]: time="2025-09-05T00:36:59.960684313Z" level=info msg="StartContainer for \"28187b59931a79c6a00417b4456b38c5ac838816f2ca1e420a1c72a5d4a88154\"" Sep 5 00:37:00.010805 containerd[1543]: time="2025-09-05T00:37:00.010761151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-gtzq2,Uid:523e999f-2f82-47af-a41c-72f22d1fc10d,Namespace:tigera-operator,Attempt:0,}" Sep 5 00:37:00.011293 containerd[1543]: time="2025-09-05T00:37:00.011265545Z" level=info msg="StartContainer for 
\"28187b59931a79c6a00417b4456b38c5ac838816f2ca1e420a1c72a5d4a88154\" returns successfully" Sep 5 00:37:00.029655 containerd[1543]: time="2025-09-05T00:37:00.029562989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:37:00.029655 containerd[1543]: time="2025-09-05T00:37:00.029615389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:37:00.029655 containerd[1543]: time="2025-09-05T00:37:00.029627108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:00.029846 containerd[1543]: time="2025-09-05T00:37:00.029714187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:00.074731 containerd[1543]: time="2025-09-05T00:37:00.074663625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-gtzq2,Uid:523e999f-2f82-47af-a41c-72f22d1fc10d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9fb283efb5860f56011b41b9d0d0fe28014d2eb889c623f30183ff978645a172\"" Sep 5 00:37:00.077315 containerd[1543]: time="2025-09-05T00:37:00.077118479Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 00:37:00.094645 kubelet[2634]: E0905 00:37:00.094612 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:01.164462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447742260.mount: Deactivated successfully. Sep 5 00:37:01.448822 kubelet[2634]: E0905 00:37:01.448237 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:01.473933 kubelet[2634]: I0905 00:37:01.473741 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xlcxr" podStartSLOduration=2.473724217 podStartE2EDuration="2.473724217s" podCreationTimestamp="2025-09-05 00:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:37:00.10402139 +0000 UTC m=+6.135695118" watchObservedRunningTime="2025-09-05 00:37:01.473724217 +0000 UTC m=+7.505397945" Sep 5 00:37:01.583406 containerd[1543]: time="2025-09-05T00:37:01.583360585Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:01.584325 containerd[1543]: time="2025-09-05T00:37:01.584100457Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 5 00:37:01.585320 containerd[1543]: time="2025-09-05T00:37:01.585079848Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:01.587454 containerd[1543]: time="2025-09-05T00:37:01.587204666Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:01.588243 containerd[1543]: time="2025-09-05T00:37:01.588210096Z" 
level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.511059497s" Sep 5 00:37:01.588285 containerd[1543]: time="2025-09-05T00:37:01.588247095Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 5 00:37:01.593326 containerd[1543]: time="2025-09-05T00:37:01.593300244Z" level=info msg="CreateContainer within sandbox \"9fb283efb5860f56011b41b9d0d0fe28014d2eb889c623f30183ff978645a172\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:37:01.602314 containerd[1543]: time="2025-09-05T00:37:01.602239794Z" level=info msg="CreateContainer within sandbox \"9fb283efb5860f56011b41b9d0d0fe28014d2eb889c623f30183ff978645a172\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b269a744f371b8cb5d68a4a81b622daa311d5f16184269032a4df2edc50043ed\"" Sep 5 00:37:01.603267 containerd[1543]: time="2025-09-05T00:37:01.602665589Z" level=info msg="StartContainer for \"b269a744f371b8cb5d68a4a81b622daa311d5f16184269032a4df2edc50043ed\"" Sep 5 00:37:01.648857 containerd[1543]: time="2025-09-05T00:37:01.648816521Z" level=info msg="StartContainer for \"b269a744f371b8cb5d68a4a81b622daa311d5f16184269032a4df2edc50043ed\" returns successfully" Sep 5 00:37:02.047920 kubelet[2634]: E0905 00:37:02.047872 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:02.098279 kubelet[2634]: E0905 00:37:02.097894 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:02.098279 kubelet[2634]: E0905 00:37:02.097946 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:02.120354 kubelet[2634]: I0905 00:37:02.120305 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-gtzq2" podStartSLOduration=1.605223225 podStartE2EDuration="3.120290241s" podCreationTimestamp="2025-09-05 00:36:59 +0000 UTC" firstStartedPulling="2025-09-05 00:37:00.075812213 +0000 UTC m=+6.107485941" lastFinishedPulling="2025-09-05 00:37:01.590879229 +0000 UTC m=+7.622552957" observedRunningTime="2025-09-05 00:37:02.120115883 +0000 UTC m=+8.151789611" watchObservedRunningTime="2025-09-05 00:37:02.120290241 +0000 UTC m=+8.151963969" Sep 5 00:37:03.702516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b269a744f371b8cb5d68a4a81b622daa311d5f16184269032a4df2edc50043ed-rootfs.mount: Deactivated successfully. 
Sep 5 00:37:03.733706 kubelet[2634]: E0905 00:37:03.733662 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:03.758064 containerd[1543]: time="2025-09-05T00:37:03.735697091Z" level=info msg="shim disconnected" id=b269a744f371b8cb5d68a4a81b622daa311d5f16184269032a4df2edc50043ed namespace=k8s.io Sep 5 00:37:03.758064 containerd[1543]: time="2025-09-05T00:37:03.758062288Z" level=warning msg="cleaning up after shim disconnected" id=b269a744f371b8cb5d68a4a81b622daa311d5f16184269032a4df2edc50043ed namespace=k8s.io Sep 5 00:37:03.760367 containerd[1543]: time="2025-09-05T00:37:03.758077168Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:37:04.110709 kubelet[2634]: E0905 00:37:04.110608 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:04.111167 kubelet[2634]: I0905 00:37:04.111124 2634 scope.go:117] "RemoveContainer" containerID="b269a744f371b8cb5d68a4a81b622daa311d5f16184269032a4df2edc50043ed" Sep 5 00:37:04.113866 containerd[1543]: time="2025-09-05T00:37:04.113789675Z" level=info msg="CreateContainer within sandbox \"9fb283efb5860f56011b41b9d0d0fe28014d2eb889c623f30183ff978645a172\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 5 00:37:04.133487 containerd[1543]: time="2025-09-05T00:37:04.133447386Z" level=info msg="CreateContainer within sandbox \"9fb283efb5860f56011b41b9d0d0fe28014d2eb889c623f30183ff978645a172\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"997c8117d62f490ff777db23563c8b0482a1bc26eec402d60385cb05726b63dd\"" Sep 5 00:37:04.133871 containerd[1543]: time="2025-09-05T00:37:04.133851222Z" level=info msg="StartContainer for \"997c8117d62f490ff777db23563c8b0482a1bc26eec402d60385cb05726b63dd\"" Sep 5 00:37:04.184789 containerd[1543]: time="2025-09-05T00:37:04.184736385Z" level=info msg="StartContainer for \"997c8117d62f490ff777db23563c8b0482a1bc26eec402d60385cb05726b63dd\" returns successfully" Sep 5 00:37:06.956983 sudo[1746]: pam_unix(sudo:session): session closed for user root Sep 5 00:37:06.960234 sshd[1739]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:06.963121 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:39190.service: Deactivated successfully. Sep 5 00:37:06.964965 systemd-logind[1518]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:37:06.965037 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:37:06.966042 systemd-logind[1518]: Removed session 7. 
Sep 5 00:37:10.344925 kubelet[2634]: I0905 00:37:10.342475 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67932192-2ac2-4e90-9b59-870ad5476e85-tigera-ca-bundle\") pod \"calico-typha-54c6cfc65c-hx2xb\" (UID: \"67932192-2ac2-4e90-9b59-870ad5476e85\") " pod="calico-system/calico-typha-54c6cfc65c-hx2xb" Sep 5 00:37:10.344925 kubelet[2634]: I0905 00:37:10.343637 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/67932192-2ac2-4e90-9b59-870ad5476e85-typha-certs\") pod \"calico-typha-54c6cfc65c-hx2xb\" (UID: \"67932192-2ac2-4e90-9b59-870ad5476e85\") " pod="calico-system/calico-typha-54c6cfc65c-hx2xb" Sep 5 00:37:10.344925 kubelet[2634]: I0905 00:37:10.343665 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjrq9\" (UniqueName: \"kubernetes.io/projected/67932192-2ac2-4e90-9b59-870ad5476e85-kube-api-access-zjrq9\") pod \"calico-typha-54c6cfc65c-hx2xb\" (UID: \"67932192-2ac2-4e90-9b59-870ad5476e85\") " pod="calico-system/calico-typha-54c6cfc65c-hx2xb" Sep 5 00:37:10.544677 kubelet[2634]: I0905 00:37:10.544555 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-cni-bin-dir\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" Sep 5 00:37:10.544677 kubelet[2634]: I0905 00:37:10.544595 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-flexvol-driver-host\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" Sep 5 00:37:10.544677 kubelet[2634]: I0905 00:37:10.544618 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-policysync\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" Sep 5 00:37:10.544677 kubelet[2634]: I0905 00:37:10.544641 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-var-lib-calico\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" Sep 5 00:37:10.545161 kubelet[2634]: I0905 00:37:10.544681 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhhgv\" (UniqueName: \"kubernetes.io/projected/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-kube-api-access-rhhgv\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" Sep 5 00:37:10.545161 kubelet[2634]: I0905 00:37:10.544733 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-node-certs\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" 
Sep 5 00:37:10.545161 kubelet[2634]: I0905 00:37:10.544755 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-var-run-calico\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" Sep 5 00:37:10.545161 kubelet[2634]: I0905 00:37:10.544792 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-lib-modules\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" Sep 5 00:37:10.545161 kubelet[2634]: I0905 00:37:10.544818 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-tigera-ca-bundle\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" Sep 5 00:37:10.545280 kubelet[2634]: I0905 00:37:10.544864 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-xtables-lock\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" Sep 5 00:37:10.545280 kubelet[2634]: I0905 00:37:10.544882 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-cni-log-dir\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" Sep 5 00:37:10.545280 kubelet[2634]: I0905 00:37:10.544895 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8df3f8b3-f383-4fb3-a8bd-a2aa17928943-cni-net-dir\") pod \"calico-node-tbthj\" (UID: \"8df3f8b3-f383-4fb3-a8bd-a2aa17928943\") " pod="calico-system/calico-node-tbthj" Sep 5 00:37:10.639544 kubelet[2634]: E0905 00:37:10.638855 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:10.641115 containerd[1543]: time="2025-09-05T00:37:10.640617238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54c6cfc65c-hx2xb,Uid:67932192-2ac2-4e90-9b59-870ad5476e85,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:10.655075 kubelet[2634]: E0905 00:37:10.654774 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:10.655075 kubelet[2634]: W0905 00:37:10.654796 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:10.655733 kubelet[2634]: E0905 00:37:10.655648 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:10.655733 kubelet[2634]: W0905 00:37:10.655664 2634 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:10.660915 kubelet[2634]: E0905 00:37:10.658169 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:10.660915 kubelet[2634]: W0905 00:37:10.658187 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:10.660915 kubelet[2634]: E0905 00:37:10.660078 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:10.660915 kubelet[2634]: E0905 00:37:10.660274 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:10.660915 kubelet[2634]: W0905 00:37:10.660283 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:10.660915 kubelet[2634]: E0905 00:37:10.660293 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:10.661462 kubelet[2634]: E0905 00:37:10.661032 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:10.661462 kubelet[2634]: W0905 00:37:10.661040 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:10.661462 kubelet[2634]: E0905 00:37:10.661048 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:10.661462 kubelet[2634]: E0905 00:37:10.661268 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:10.661462 kubelet[2634]: W0905 00:37:10.661283 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:10.661462 kubelet[2634]: E0905 00:37:10.661292 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:10.661764 kubelet[2634]: E0905 00:37:10.661703 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:10.663322 kubelet[2634]: E0905 00:37:10.662108 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 5 00:37:10.700340 kubelet[2634]: E0905 00:37:10.699500 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lpmx" podUID="b629d959-9ce0-4662-8a0e-a74c6f7f28b5"
Sep 5 00:37:10.707982 containerd[1543]: time="2025-09-05T00:37:10.704523958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:37:10.707982 containerd[1543]: time="2025-09-05T00:37:10.705072355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:37:10.707982 containerd[1543]: time="2025-09-05T00:37:10.705086475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:37:10.707982 containerd[1543]: time="2025-09-05T00:37:10.705208394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:37:10.756696 containerd[1543]: time="2025-09-05T00:37:10.756495393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54c6cfc65c-hx2xb,Uid:67932192-2ac2-4e90-9b59-870ad5476e85,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d63c2472e076baade6a7ec072caf169422e402db227a4ca7c4256fe194a9a64\""
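The probe noise stops once a real driver answers init with a Success status. For illustration only, here is a stand-in driver showing the smallest JSON reply kubelet accepts for init; this is not Calico's actual uds binary (which also serves mount/unmount), just a sketch of the handshake from the driver's side:

    package main

    import (
        "fmt"
        "os"
    )

    // A placeholder FlexVolume driver: it implements only "init",
    // replying with the minimal JSON the kubelet expects. Installing a
    // working driver at nodeagent~uds/uds is what ends the repeated
    // "driver call failed" records above.
    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            fmt.Println(`{"status":"Success","capabilities":{"attach":false}}`)
            return
        }
        fmt.Println(`{"status":"Not supported"}`)
        os.Exit(1)
    }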
Sep 5 00:37:10.757750 kubelet[2634]: E0905 00:37:10.757126 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:37:10.758181 kubelet[2634]: I0905 00:37:10.758068 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69wfq\" (UniqueName: \"kubernetes.io/projected/b629d959-9ce0-4662-8a0e-a74c6f7f28b5-kube-api-access-69wfq\") pod \"csi-node-driver-8lpmx\" (UID: \"b629d959-9ce0-4662-8a0e-a74c6f7f28b5\") " pod="calico-system/csi-node-driver-8lpmx"
Sep 5 00:37:10.758316 kubelet[2634]: I0905 00:37:10.758292 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b629d959-9ce0-4662-8a0e-a74c6f7f28b5-kubelet-dir\") pod \"csi-node-driver-8lpmx\" (UID: \"b629d959-9ce0-4662-8a0e-a74c6f7f28b5\") " pod="calico-system/csi-node-driver-8lpmx"
Sep 5 00:37:10.758695 kubelet[2634]: I0905 00:37:10.758462 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b629d959-9ce0-4662-8a0e-a74c6f7f28b5-registration-dir\") pod \"csi-node-driver-8lpmx\" (UID: \"b629d959-9ce0-4662-8a0e-a74c6f7f28b5\") " pod="calico-system/csi-node-driver-8lpmx"
Sep 5 00:37:10.758695 kubelet[2634]: I0905 00:37:10.758631 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b629d959-9ce0-4662-8a0e-a74c6f7f28b5-varrun\") pod \"csi-node-driver-8lpmx\" (UID: \"b629d959-9ce0-4662-8a0e-a74c6f7f28b5\") " pod="calico-system/csi-node-driver-8lpmx"
Sep 5 00:37:10.758859 kubelet[2634]: I0905 00:37:10.758803 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b629d959-9ce0-4662-8a0e-a74c6f7f28b5-socket-dir\") pod \"csi-node-driver-8lpmx\" (UID: \"b629d959-9ce0-4662-8a0e-a74c6f7f28b5\") " pod="calico-system/calico-system/csi-node-driver-8lpmx"
Sep 5 00:37:10.761747 containerd[1543]: time="2025-09-05T00:37:10.759716493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 5 00:37:10.808569 containerd[1543]: time="2025-09-05T00:37:10.808530907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tbthj,Uid:8df3f8b3-f383-4fb3-a8bd-a2aa17928943,Namespace:calico-system,Attempt:0,}"
Sep 5 00:37:10.845452 containerd[1543]: time="2025-09-05T00:37:10.844020685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:37:10.845452 containerd[1543]: time="2025-09-05T00:37:10.844423282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:37:10.845452 containerd[1543]: time="2025-09-05T00:37:10.844436762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:37:10.845452 containerd[1543]: time="2025-09-05T00:37:10.844517282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:37:10.899538 containerd[1543]: time="2025-09-05T00:37:10.899371378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tbthj,Uid:8df3f8b3-f383-4fb3-a8bd-a2aa17928943,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc33d6b234c06550a1bf65f1022258e76279ea75cf19cb4399b9970e51e3b068\""
Sep 5 00:37:11.149427 update_engine[1527]: I20250905 00:37:11.149331 1527 update_attempter.cc:509] Updating boot flags...
Sep 5 00:37:11.175972 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3269)
Sep 5 00:37:11.213005 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3274)
Sep 5 00:37:12.073516 kubelet[2634]: E0905 00:37:12.072614 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lpmx" podUID="b629d959-9ce0-4662-8a0e-a74c6f7f28b5"
Sep 5 00:37:12.317038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859218293.mount: Deactivated successfully.
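The recurring dns.go "Nameserver limits exceeded" records are a separate, mostly benign issue: the kubelet applies at most three nameservers from the node's resolv.conf (the classic glibc resolver limit), so with 1.1.1.1, 1.0.0.1 and 8.8.8.8 already applied, any further entries on this host are dropped from pod DNS configs. A rough sketch of that truncation rule, with the parsing deliberately simplified and the constant name my own:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the three-nameserver cap that produces the
    // "Nameserver limits exceeded" records above; extras are omitted.
    const maxNameservers = 3

    func appliedNameservers(path string) ([]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            servers = servers[:maxNameservers] // keep only the first three
        }
        return servers, sc.Err()
    }

    func main() {
        servers, err := appliedNameservers("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }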
Sep 5 00:37:13.182535 containerd[1543]: time="2025-09-05T00:37:13.182489747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:37:13.184292 containerd[1543]: time="2025-09-05T00:37:13.184098458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 5 00:37:13.185454 containerd[1543]: time="2025-09-05T00:37:13.185408771Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:37:13.187598 containerd[1543]: time="2025-09-05T00:37:13.187571839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:37:13.188282 containerd[1543]: time="2025-09-05T00:37:13.188253316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.428506703s"
Sep 5 00:37:13.188342 containerd[1543]: time="2025-09-05T00:37:13.188284435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 5 00:37:13.190470 containerd[1543]: time="2025-09-05T00:37:13.189521589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 5 00:37:13.203662 containerd[1543]: time="2025-09-05T00:37:13.203389514Z" level=info msg="CreateContainer within sandbox \"4d63c2472e076baade6a7ec072caf169422e402db227a4ca7c4256fe194a9a64\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 5 00:37:13.238995 containerd[1543]: time="2025-09-05T00:37:13.238953482Z" level=info msg="CreateContainer within sandbox \"4d63c2472e076baade6a7ec072caf169422e402db227a4ca7c4256fe194a9a64\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a2d1548ce6f4faf2e2134160601d3062fad5dbd140ad385ee9dd2b67b0cf8271\""
Sep 5 00:37:13.240341 containerd[1543]: time="2025-09-05T00:37:13.239704638Z" level=info msg="StartContainer for \"a2d1548ce6f4faf2e2134160601d3062fad5dbd140ad385ee9dd2b67b0cf8271\""
Sep 5 00:37:13.295860 containerd[1543]: time="2025-09-05T00:37:13.295822455Z" level=info msg="StartContainer for \"a2d1548ce6f4faf2e2134160601d3062fad5dbd140ad385ee9dd2b67b0cf8271\" returns successfully"
Sep 5 00:37:14.072744 kubelet[2634]: E0905 00:37:14.071826 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lpmx" podUID="b629d959-9ce0-4662-8a0e-a74c6f7f28b5"
Sep 5 00:37:14.134154 kubelet[2634]: E0905 00:37:14.134117 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
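For scale: the typha pull above moved 33,105,775 bytes in 2.428506703 s, roughly 13 MiB/s, ordinary registry throughput. The image fetch and container start are healthy; the remaining error records trace back to the still-missing FlexVolume driver and the not-yet-initialized CNI plugin, not to the pulls.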
Sep 5 00:37:14.184267 kubelet[2634]: E0905 00:37:14.184094 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:37:14.184267 kubelet[2634]: W0905 00:37:14.184124 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:37:14.184267 kubelet[2634]: E0905 00:37:14.184144 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:37:14.199989 kubelet[2634]: E0905 00:37:14.199710 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:37:14.199989 kubelet[2634]: W0905 00:37:14.199732 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:37:14.199989 kubelet[2634]: E0905 00:37:14.199930 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 5 00:37:14.200981 kubelet[2634]: E0905 00:37:14.200967 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.201082 kubelet[2634]: W0905 00:37:14.201069 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.201509 kubelet[2634]: E0905 00:37:14.201199 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:14.201698 kubelet[2634]: E0905 00:37:14.201685 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.201759 kubelet[2634]: W0905 00:37:14.201749 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.201897 kubelet[2634]: E0905 00:37:14.201885 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:14.203008 kubelet[2634]: E0905 00:37:14.202993 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.203195 kubelet[2634]: W0905 00:37:14.203095 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.203195 kubelet[2634]: E0905 00:37:14.203177 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:14.203534 kubelet[2634]: E0905 00:37:14.203446 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.203534 kubelet[2634]: W0905 00:37:14.203459 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.203534 kubelet[2634]: E0905 00:37:14.203516 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:14.204614 kubelet[2634]: E0905 00:37:14.204474 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.204614 kubelet[2634]: W0905 00:37:14.204490 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.204614 kubelet[2634]: E0905 00:37:14.204533 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:14.207004 kubelet[2634]: E0905 00:37:14.206937 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.207004 kubelet[2634]: W0905 00:37:14.206952 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.207086 kubelet[2634]: E0905 00:37:14.207003 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:14.207798 kubelet[2634]: E0905 00:37:14.207684 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.207798 kubelet[2634]: W0905 00:37:14.207697 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.207798 kubelet[2634]: E0905 00:37:14.207733 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:14.208120 kubelet[2634]: E0905 00:37:14.208007 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.208120 kubelet[2634]: W0905 00:37:14.208020 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.208120 kubelet[2634]: E0905 00:37:14.208063 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:14.208413 kubelet[2634]: E0905 00:37:14.208311 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.208413 kubelet[2634]: W0905 00:37:14.208325 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.208413 kubelet[2634]: E0905 00:37:14.208344 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:14.208756 kubelet[2634]: E0905 00:37:14.208553 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.208756 kubelet[2634]: W0905 00:37:14.208565 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.208756 kubelet[2634]: E0905 00:37:14.208583 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:14.208991 kubelet[2634]: E0905 00:37:14.208886 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.209040 kubelet[2634]: W0905 00:37:14.208993 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.209040 kubelet[2634]: E0905 00:37:14.209013 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:14.209726 kubelet[2634]: E0905 00:37:14.209688 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.209726 kubelet[2634]: W0905 00:37:14.209718 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.209801 kubelet[2634]: E0905 00:37:14.209779 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:14.210300 kubelet[2634]: E0905 00:37:14.210280 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:14.210300 kubelet[2634]: W0905 00:37:14.210295 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:14.210394 kubelet[2634]: E0905 00:37:14.210321 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:15.127549 containerd[1543]: time="2025-09-05T00:37:15.127506092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:15.128445 containerd[1543]: time="2025-09-05T00:37:15.128337168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 5 00:37:15.129435 containerd[1543]: time="2025-09-05T00:37:15.129386483Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:15.131612 containerd[1543]: time="2025-09-05T00:37:15.131582632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:15.132336 containerd[1543]: time="2025-09-05T00:37:15.132297509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.94271332s" Sep 5 00:37:15.132336 containerd[1543]: time="2025-09-05T00:37:15.132336388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 00:37:15.134560 containerd[1543]: time="2025-09-05T00:37:15.134318379Z" level=info msg="CreateContainer within sandbox \"fc33d6b234c06550a1bf65f1022258e76279ea75cf19cb4399b9970e51e3b068\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:37:15.137110 kubelet[2634]: I0905 00:37:15.137059 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:37:15.137542 kubelet[2634]: E0905 00:37:15.137523 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:15.171348 containerd[1543]: time="2025-09-05T00:37:15.171303517Z" level=info msg="CreateContainer within sandbox \"fc33d6b234c06550a1bf65f1022258e76279ea75cf19cb4399b9970e51e3b068\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f56f3286532496fd1b3d2fa63491878b7ec663aa50934f617c412d5c8abaaef5\"" Sep 5 00:37:15.171754 containerd[1543]: time="2025-09-05T00:37:15.171712315Z" level=info msg="StartContainer for \"f56f3286532496fd1b3d2fa63491878b7ec663aa50934f617c412d5c8abaaef5\"" Sep 5 00:37:15.196100 kubelet[2634]: E0905 00:37:15.196071 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.196100 kubelet[2634]: W0905 00:37:15.196099 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.196296 kubelet[2634]: E0905 00:37:15.196120 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:15.196345 kubelet[2634]: E0905 00:37:15.196331 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.196345 kubelet[2634]: W0905 00:37:15.196346 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.196409 kubelet[2634]: E0905 00:37:15.196355 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.196589 kubelet[2634]: E0905 00:37:15.196577 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.196620 kubelet[2634]: W0905 00:37:15.196589 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.196620 kubelet[2634]: E0905 00:37:15.196598 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.196829 kubelet[2634]: E0905 00:37:15.196812 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.196829 kubelet[2634]: W0905 00:37:15.196829 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.196890 kubelet[2634]: E0905 00:37:15.196838 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.197056 kubelet[2634]: E0905 00:37:15.197044 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.197100 kubelet[2634]: W0905 00:37:15.197057 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.197100 kubelet[2634]: E0905 00:37:15.197074 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.197284 kubelet[2634]: E0905 00:37:15.197273 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.197284 kubelet[2634]: W0905 00:37:15.197284 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.197344 kubelet[2634]: E0905 00:37:15.197292 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:15.197469 kubelet[2634]: E0905 00:37:15.197457 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.197500 kubelet[2634]: W0905 00:37:15.197471 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.197500 kubelet[2634]: E0905 00:37:15.197479 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.197631 kubelet[2634]: E0905 00:37:15.197620 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.197631 kubelet[2634]: W0905 00:37:15.197630 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.197691 kubelet[2634]: E0905 00:37:15.197638 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.197789 kubelet[2634]: E0905 00:37:15.197775 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.197789 kubelet[2634]: W0905 00:37:15.197789 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.197847 kubelet[2634]: E0905 00:37:15.197797 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.198604 kubelet[2634]: E0905 00:37:15.197922 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.198604 kubelet[2634]: W0905 00:37:15.197931 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.198604 kubelet[2634]: E0905 00:37:15.197939 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.198604 kubelet[2634]: E0905 00:37:15.198268 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.198604 kubelet[2634]: W0905 00:37:15.198276 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.198604 kubelet[2634]: E0905 00:37:15.198286 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:15.198604 kubelet[2634]: E0905 00:37:15.198470 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.198604 kubelet[2634]: W0905 00:37:15.198478 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.198604 kubelet[2634]: E0905 00:37:15.198486 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.201235 kubelet[2634]: E0905 00:37:15.201214 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.201235 kubelet[2634]: W0905 00:37:15.201228 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.201348 kubelet[2634]: E0905 00:37:15.201258 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.201517 kubelet[2634]: E0905 00:37:15.201498 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.201517 kubelet[2634]: W0905 00:37:15.201513 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.201517 kubelet[2634]: E0905 00:37:15.201523 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.201719 kubelet[2634]: E0905 00:37:15.201703 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.201719 kubelet[2634]: W0905 00:37:15.201716 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.201777 kubelet[2634]: E0905 00:37:15.201726 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.205086 kubelet[2634]: E0905 00:37:15.205069 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.205191 kubelet[2634]: W0905 00:37:15.205177 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.205256 kubelet[2634]: E0905 00:37:15.205244 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:15.205588 kubelet[2634]: E0905 00:37:15.205476 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.205588 kubelet[2634]: W0905 00:37:15.205490 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.205588 kubelet[2634]: E0905 00:37:15.205505 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.205739 kubelet[2634]: E0905 00:37:15.205727 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.205795 kubelet[2634]: W0905 00:37:15.205783 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.205868 kubelet[2634]: E0905 00:37:15.205856 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.206106 kubelet[2634]: E0905 00:37:15.206080 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.206106 kubelet[2634]: W0905 00:37:15.206103 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.206183 kubelet[2634]: E0905 00:37:15.206122 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.206327 kubelet[2634]: E0905 00:37:15.206315 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.206327 kubelet[2634]: W0905 00:37:15.206325 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.206380 kubelet[2634]: E0905 00:37:15.206337 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.206498 kubelet[2634]: E0905 00:37:15.206489 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.206523 kubelet[2634]: W0905 00:37:15.206498 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.206523 kubelet[2634]: E0905 00:37:15.206510 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:15.206714 kubelet[2634]: E0905 00:37:15.206702 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.206714 kubelet[2634]: W0905 00:37:15.206712 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.206785 kubelet[2634]: E0905 00:37:15.206724 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.207133 kubelet[2634]: E0905 00:37:15.207021 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.207133 kubelet[2634]: W0905 00:37:15.207035 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.207133 kubelet[2634]: E0905 00:37:15.207052 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.207281 kubelet[2634]: E0905 00:37:15.207270 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.207343 kubelet[2634]: W0905 00:37:15.207332 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.207438 kubelet[2634]: E0905 00:37:15.207411 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.207593 kubelet[2634]: E0905 00:37:15.207580 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.207731 kubelet[2634]: W0905 00:37:15.207643 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.207731 kubelet[2634]: E0905 00:37:15.207673 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.207849 kubelet[2634]: E0905 00:37:15.207837 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.207932 kubelet[2634]: W0905 00:37:15.207894 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.208013 kubelet[2634]: E0905 00:37:15.208000 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:15.208270 kubelet[2634]: E0905 00:37:15.208252 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.208270 kubelet[2634]: W0905 00:37:15.208267 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.208348 kubelet[2634]: E0905 00:37:15.208282 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.208505 kubelet[2634]: E0905 00:37:15.208494 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.208542 kubelet[2634]: W0905 00:37:15.208507 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.208542 kubelet[2634]: E0905 00:37:15.208530 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.208866 kubelet[2634]: E0905 00:37:15.208853 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.208866 kubelet[2634]: W0905 00:37:15.208866 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.208955 kubelet[2634]: E0905 00:37:15.208941 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.209147 kubelet[2634]: E0905 00:37:15.209135 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.209147 kubelet[2634]: W0905 00:37:15.209147 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.209210 kubelet[2634]: E0905 00:37:15.209160 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.209357 kubelet[2634]: E0905 00:37:15.209345 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.209388 kubelet[2634]: W0905 00:37:15.209357 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.209388 kubelet[2634]: E0905 00:37:15.209369 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:37:15.209774 kubelet[2634]: E0905 00:37:15.209758 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.209813 kubelet[2634]: W0905 00:37:15.209774 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.209896 kubelet[2634]: E0905 00:37:15.209786 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.210110 kubelet[2634]: E0905 00:37:15.210088 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:37:15.210144 kubelet[2634]: W0905 00:37:15.210110 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:37:15.210144 kubelet[2634]: E0905 00:37:15.210125 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:37:15.226920 containerd[1543]: time="2025-09-05T00:37:15.226875125Z" level=info msg="StartContainer for \"f56f3286532496fd1b3d2fa63491878b7ec663aa50934f617c412d5c8abaaef5\" returns successfully" Sep 5 00:37:15.256784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f56f3286532496fd1b3d2fa63491878b7ec663aa50934f617c412d5c8abaaef5-rootfs.mount: Deactivated successfully. 
Sep 5 00:37:15.265693 containerd[1543]: time="2025-09-05T00:37:15.265630375Z" level=info msg="shim disconnected" id=f56f3286532496fd1b3d2fa63491878b7ec663aa50934f617c412d5c8abaaef5 namespace=k8s.io Sep 5 00:37:15.265787 containerd[1543]: time="2025-09-05T00:37:15.265695735Z" level=warning msg="cleaning up after shim disconnected" id=f56f3286532496fd1b3d2fa63491878b7ec663aa50934f617c412d5c8abaaef5 namespace=k8s.io Sep 5 00:37:15.265787 containerd[1543]: time="2025-09-05T00:37:15.265706535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:37:16.072627 kubelet[2634]: E0905 00:37:16.072571 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lpmx" podUID="b629d959-9ce0-4662-8a0e-a74c6f7f28b5" Sep 5 00:37:16.153345 containerd[1543]: time="2025-09-05T00:37:16.153241338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 00:37:16.178991 kubelet[2634]: I0905 00:37:16.178928 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54c6cfc65c-hx2xb" podStartSLOduration=3.748243848 podStartE2EDuration="6.178897138s" podCreationTimestamp="2025-09-05 00:37:10 +0000 UTC" firstStartedPulling="2025-09-05 00:37:10.758456141 +0000 UTC m=+16.790129869" lastFinishedPulling="2025-09-05 00:37:13.189109431 +0000 UTC m=+19.220783159" observedRunningTime="2025-09-05 00:37:14.150436364 +0000 UTC m=+20.182110052" watchObservedRunningTime="2025-09-05 00:37:16.178897138 +0000 UTC m=+22.210570866" Sep 5 00:37:18.072585 kubelet[2634]: E0905 00:37:18.072532 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lpmx" podUID="b629d959-9ce0-4662-8a0e-a74c6f7f28b5" Sep 5 00:37:18.338559 containerd[1543]: time="2025-09-05T00:37:18.338453938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:18.339664 containerd[1543]: time="2025-09-05T00:37:18.339216295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 00:37:18.340014 containerd[1543]: time="2025-09-05T00:37:18.339964652Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:18.342675 containerd[1543]: time="2025-09-05T00:37:18.342495041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:18.343188 containerd[1543]: time="2025-09-05T00:37:18.343161438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.18987186s" Sep 5 00:37:18.343253 containerd[1543]: time="2025-09-05T00:37:18.343192558Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 00:37:18.346790 containerd[1543]: time="2025-09-05T00:37:18.346680903Z" level=info msg="CreateContainer within sandbox \"fc33d6b234c06550a1bf65f1022258e76279ea75cf19cb4399b9970e51e3b068\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:37:18.360724 containerd[1543]: time="2025-09-05T00:37:18.360676803Z" level=info msg="CreateContainer within sandbox \"fc33d6b234c06550a1bf65f1022258e76279ea75cf19cb4399b9970e51e3b068\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f20a8c65de4ff4ed50011c80dd872c3499a0f097dc6384bf7cf7d4245225c713\"" Sep 5 00:37:18.362203 containerd[1543]: time="2025-09-05T00:37:18.361045602Z" level=info msg="StartContainer for \"f20a8c65de4ff4ed50011c80dd872c3499a0f097dc6384bf7cf7d4245225c713\"" Sep 5 00:37:18.420560 containerd[1543]: time="2025-09-05T00:37:18.420489268Z" level=info msg="StartContainer for \"f20a8c65de4ff4ed50011c80dd872c3499a0f097dc6384bf7cf7d4245225c713\" returns successfully" Sep 5 00:37:18.960409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f20a8c65de4ff4ed50011c80dd872c3499a0f097dc6384bf7cf7d4245225c713-rootfs.mount: Deactivated successfully. Sep 5 00:37:18.963635 containerd[1543]: time="2025-09-05T00:37:18.963584386Z" level=info msg="shim disconnected" id=f20a8c65de4ff4ed50011c80dd872c3499a0f097dc6384bf7cf7d4245225c713 namespace=k8s.io Sep 5 00:37:18.963635 containerd[1543]: time="2025-09-05T00:37:18.963628826Z" level=warning msg="cleaning up after shim disconnected" id=f20a8c65de4ff4ed50011c80dd872c3499a0f097dc6384bf7cf7d4245225c713 namespace=k8s.io Sep 5 00:37:18.963635 containerd[1543]: time="2025-09-05T00:37:18.963636946Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:37:18.969440 kubelet[2634]: I0905 00:37:18.968279 2634 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 00:37:19.035029 kubelet[2634]: I0905 00:37:19.034894 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1594e321-794f-47ae-a260-d32320cc168b-calico-apiserver-certs\") pod \"calico-apiserver-7d5cb679cb-vqrh4\" (UID: \"1594e321-794f-47ae-a260-d32320cc168b\") " pod="calico-apiserver/calico-apiserver-7d5cb679cb-vqrh4" Sep 5 00:37:19.035029 kubelet[2634]: I0905 00:37:19.035031 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zp5\" (UniqueName: \"kubernetes.io/projected/1594e321-794f-47ae-a260-d32320cc168b-kube-api-access-85zp5\") pod \"calico-apiserver-7d5cb679cb-vqrh4\" (UID: \"1594e321-794f-47ae-a260-d32320cc168b\") " pod="calico-apiserver/calico-apiserver-7d5cb679cb-vqrh4" Sep 5 00:37:19.035196 kubelet[2634]: I0905 00:37:19.035060 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smnmh\" (UniqueName: \"kubernetes.io/projected/b8bf72a2-a8ab-4666-8ec0-44f6b53426bd-kube-api-access-smnmh\") pod \"calico-apiserver-7d5cb679cb-llcvx\" (UID: \"b8bf72a2-a8ab-4666-8ec0-44f6b53426bd\") " pod="calico-apiserver/calico-apiserver-7d5cb679cb-llcvx" Sep 5 00:37:19.035196 kubelet[2634]: I0905 00:37:19.035082 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4hcb\" (UniqueName: 
\"kubernetes.io/projected/92604847-f3c6-4b7c-9399-1aa229b56af1-kube-api-access-r4hcb\") pod \"calico-kube-controllers-788bb96d6f-s5fmw\" (UID: \"92604847-f3c6-4b7c-9399-1aa229b56af1\") " pod="calico-system/calico-kube-controllers-788bb96d6f-s5fmw" Sep 5 00:37:19.035196 kubelet[2634]: I0905 00:37:19.035099 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqwxx\" (UniqueName: \"kubernetes.io/projected/ba02548f-dd9a-487d-9c64-7235820cae6f-kube-api-access-mqwxx\") pod \"coredns-7c65d6cfc9-qd2ch\" (UID: \"ba02548f-dd9a-487d-9c64-7235820cae6f\") " pod="kube-system/coredns-7c65d6cfc9-qd2ch" Sep 5 00:37:19.035196 kubelet[2634]: I0905 00:37:19.035115 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2kr2\" (UniqueName: \"kubernetes.io/projected/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-kube-api-access-p2kr2\") pod \"whisker-6f54597f5f-zzxvr\" (UID: \"40b66138-bf6a-409e-a1b2-a4d0a1c088cb\") " pod="calico-system/whisker-6f54597f5f-zzxvr" Sep 5 00:37:19.035196 kubelet[2634]: I0905 00:37:19.035131 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cdc98cd-288c-46fa-8622-2d2669974b33-config-volume\") pod \"coredns-7c65d6cfc9-n6pct\" (UID: \"2cdc98cd-288c-46fa-8622-2d2669974b33\") " pod="kube-system/coredns-7c65d6cfc9-n6pct" Sep 5 00:37:19.035318 kubelet[2634]: I0905 00:37:19.035147 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3c0cb5b9-141e-499d-8a34-a9fa818a010c-goldmane-key-pair\") pod \"goldmane-7988f88666-5d4st\" (UID: \"3c0cb5b9-141e-499d-8a34-a9fa818a010c\") " pod="calico-system/goldmane-7988f88666-5d4st" Sep 5 00:37:19.035318 kubelet[2634]: I0905 00:37:19.035163 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0cb5b9-141e-499d-8a34-a9fa818a010c-config\") pod \"goldmane-7988f88666-5d4st\" (UID: \"3c0cb5b9-141e-499d-8a34-a9fa818a010c\") " pod="calico-system/goldmane-7988f88666-5d4st" Sep 5 00:37:19.035318 kubelet[2634]: I0905 00:37:19.035177 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c0cb5b9-141e-499d-8a34-a9fa818a010c-goldmane-ca-bundle\") pod \"goldmane-7988f88666-5d4st\" (UID: \"3c0cb5b9-141e-499d-8a34-a9fa818a010c\") " pod="calico-system/goldmane-7988f88666-5d4st" Sep 5 00:37:19.035318 kubelet[2634]: I0905 00:37:19.035192 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-whisker-backend-key-pair\") pod \"whisker-6f54597f5f-zzxvr\" (UID: \"40b66138-bf6a-409e-a1b2-a4d0a1c088cb\") " pod="calico-system/whisker-6f54597f5f-zzxvr" Sep 5 00:37:19.035318 kubelet[2634]: I0905 00:37:19.035206 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-whisker-ca-bundle\") pod \"whisker-6f54597f5f-zzxvr\" (UID: \"40b66138-bf6a-409e-a1b2-a4d0a1c088cb\") " pod="calico-system/whisker-6f54597f5f-zzxvr" Sep 5 00:37:19.035427 kubelet[2634]: I0905 00:37:19.035232 
2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b8bf72a2-a8ab-4666-8ec0-44f6b53426bd-calico-apiserver-certs\") pod \"calico-apiserver-7d5cb679cb-llcvx\" (UID: \"b8bf72a2-a8ab-4666-8ec0-44f6b53426bd\") " pod="calico-apiserver/calico-apiserver-7d5cb679cb-llcvx" Sep 5 00:37:19.035427 kubelet[2634]: I0905 00:37:19.035249 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba02548f-dd9a-487d-9c64-7235820cae6f-config-volume\") pod \"coredns-7c65d6cfc9-qd2ch\" (UID: \"ba02548f-dd9a-487d-9c64-7235820cae6f\") " pod="kube-system/coredns-7c65d6cfc9-qd2ch" Sep 5 00:37:19.035427 kubelet[2634]: I0905 00:37:19.035266 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6lk\" (UniqueName: \"kubernetes.io/projected/3c0cb5b9-141e-499d-8a34-a9fa818a010c-kube-api-access-4g6lk\") pod \"goldmane-7988f88666-5d4st\" (UID: \"3c0cb5b9-141e-499d-8a34-a9fa818a010c\") " pod="calico-system/goldmane-7988f88666-5d4st" Sep 5 00:37:19.035427 kubelet[2634]: I0905 00:37:19.035290 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92604847-f3c6-4b7c-9399-1aa229b56af1-tigera-ca-bundle\") pod \"calico-kube-controllers-788bb96d6f-s5fmw\" (UID: \"92604847-f3c6-4b7c-9399-1aa229b56af1\") " pod="calico-system/calico-kube-controllers-788bb96d6f-s5fmw" Sep 5 00:37:19.035427 kubelet[2634]: I0905 00:37:19.035308 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzmkd\" (UniqueName: \"kubernetes.io/projected/2cdc98cd-288c-46fa-8622-2d2669974b33-kube-api-access-bzmkd\") pod \"coredns-7c65d6cfc9-n6pct\" (UID: \"2cdc98cd-288c-46fa-8622-2d2669974b33\") " pod="kube-system/coredns-7c65d6cfc9-n6pct" Sep 5 00:37:19.159393 containerd[1543]: time="2025-09-05T00:37:19.159333939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 00:37:19.308939 kubelet[2634]: E0905 00:37:19.308255 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:19.309257 containerd[1543]: time="2025-09-05T00:37:19.308986487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qd2ch,Uid:ba02548f-dd9a-487d-9c64-7235820cae6f,Namespace:kube-system,Attempt:0,}" Sep 5 00:37:19.312463 containerd[1543]: time="2025-09-05T00:37:19.312432393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5cb679cb-vqrh4,Uid:1594e321-794f-47ae-a260-d32320cc168b,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:37:19.341416 kubelet[2634]: E0905 00:37:19.341073 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:19.342603 containerd[1543]: time="2025-09-05T00:37:19.341638913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5cb679cb-llcvx,Uid:b8bf72a2-a8ab-4666-8ec0-44f6b53426bd,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:37:19.342603 containerd[1543]: time="2025-09-05T00:37:19.341875912Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-n6pct,Uid:2cdc98cd-288c-46fa-8622-2d2669974b33,Namespace:kube-system,Attempt:0,}" Sep 5 00:37:19.346484 containerd[1543]: time="2025-09-05T00:37:19.346093895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-5d4st,Uid:3c0cb5b9-141e-499d-8a34-a9fa818a010c,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:19.346484 containerd[1543]: time="2025-09-05T00:37:19.346252694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f54597f5f-zzxvr,Uid:40b66138-bf6a-409e-a1b2-a4d0a1c088cb,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:19.347979 containerd[1543]: time="2025-09-05T00:37:19.347954527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788bb96d6f-s5fmw,Uid:92604847-f3c6-4b7c-9399-1aa229b56af1,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:19.468369 containerd[1543]: time="2025-09-05T00:37:19.468280475Z" level=error msg="Failed to destroy network for sandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.468712 containerd[1543]: time="2025-09-05T00:37:19.468650313Z" level=error msg="encountered an error cleaning up failed sandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.469418 containerd[1543]: time="2025-09-05T00:37:19.468950432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5cb679cb-vqrh4,Uid:1594e321-794f-47ae-a260-d32320cc168b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.471456 containerd[1543]: time="2025-09-05T00:37:19.471406062Z" level=error msg="Failed to destroy network for sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.472537 kubelet[2634]: E0905 00:37:19.472493 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.473181 containerd[1543]: time="2025-09-05T00:37:19.473142375Z" level=error msg="encountered an error cleaning up failed sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.473326 
containerd[1543]: time="2025-09-05T00:37:19.473203815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qd2ch,Uid:ba02548f-dd9a-487d-9c64-7235820cae6f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.473992 kubelet[2634]: E0905 00:37:19.473958 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.475624 kubelet[2634]: E0905 00:37:19.475152 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qd2ch" Sep 5 00:37:19.475624 kubelet[2634]: E0905 00:37:19.475197 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qd2ch" Sep 5 00:37:19.475624 kubelet[2634]: E0905 00:37:19.475272 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qd2ch_kube-system(ba02548f-dd9a-487d-9c64-7235820cae6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qd2ch_kube-system(ba02548f-dd9a-487d-9c64-7235820cae6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qd2ch" podUID="ba02548f-dd9a-487d-9c64-7235820cae6f" Sep 5 00:37:19.478283 kubelet[2634]: E0905 00:37:19.478157 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d5cb679cb-vqrh4" Sep 5 00:37:19.478283 kubelet[2634]: E0905 00:37:19.478198 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d5cb679cb-vqrh4" Sep 5 00:37:19.478283 kubelet[2634]: E0905 00:37:19.478243 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d5cb679cb-vqrh4_calico-apiserver(1594e321-794f-47ae-a260-d32320cc168b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d5cb679cb-vqrh4_calico-apiserver(1594e321-794f-47ae-a260-d32320cc168b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d5cb679cb-vqrh4" podUID="1594e321-794f-47ae-a260-d32320cc168b" Sep 5 00:37:19.511374 containerd[1543]: time="2025-09-05T00:37:19.511304459Z" level=error msg="Failed to destroy network for sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.512169 containerd[1543]: time="2025-09-05T00:37:19.512118736Z" level=error msg="encountered an error cleaning up failed sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.512243 containerd[1543]: time="2025-09-05T00:37:19.512183215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5cb679cb-llcvx,Uid:b8bf72a2-a8ab-4666-8ec0-44f6b53426bd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.512545 kubelet[2634]: E0905 00:37:19.512488 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.512960 kubelet[2634]: E0905 00:37:19.512874 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d5cb679cb-llcvx" Sep 5 00:37:19.512960 kubelet[2634]: E0905 00:37:19.512931 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d5cb679cb-llcvx" Sep 5 00:37:19.513940 kubelet[2634]: E0905 00:37:19.513089 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d5cb679cb-llcvx_calico-apiserver(b8bf72a2-a8ab-4666-8ec0-44f6b53426bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d5cb679cb-llcvx_calico-apiserver(b8bf72a2-a8ab-4666-8ec0-44f6b53426bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d5cb679cb-llcvx" podUID="b8bf72a2-a8ab-4666-8ec0-44f6b53426bd" Sep 5 00:37:19.529172 containerd[1543]: time="2025-09-05T00:37:19.529113026Z" level=error msg="Failed to destroy network for sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.531495 containerd[1543]: time="2025-09-05T00:37:19.531434217Z" level=error msg="Failed to destroy network for sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.531965 containerd[1543]: time="2025-09-05T00:37:19.531435497Z" level=error msg="encountered an error cleaning up failed sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.532102 containerd[1543]: time="2025-09-05T00:37:19.532075014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f54597f5f-zzxvr,Uid:40b66138-bf6a-409e-a1b2-a4d0a1c088cb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.532300 containerd[1543]: time="2025-09-05T00:37:19.532110294Z" level=error msg="encountered an error cleaning up failed sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.532357 containerd[1543]: time="2025-09-05T00:37:19.532313293Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-788bb96d6f-s5fmw,Uid:92604847-f3c6-4b7c-9399-1aa229b56af1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.532544 kubelet[2634]: E0905 00:37:19.532510 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.533178 kubelet[2634]: E0905 00:37:19.532681 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.533178 kubelet[2634]: E0905 00:37:19.533108 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788bb96d6f-s5fmw" Sep 5 00:37:19.533178 kubelet[2634]: E0905 00:37:19.533131 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788bb96d6f-s5fmw" Sep 5 00:37:19.533311 kubelet[2634]: E0905 00:37:19.533188 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-788bb96d6f-s5fmw_calico-system(92604847-f3c6-4b7c-9399-1aa229b56af1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-788bb96d6f-s5fmw_calico-system(92604847-f3c6-4b7c-9399-1aa229b56af1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-788bb96d6f-s5fmw" podUID="92604847-f3c6-4b7c-9399-1aa229b56af1" Sep 5 00:37:19.533610 kubelet[2634]: E0905 00:37:19.533396 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f54597f5f-zzxvr" Sep 5 00:37:19.533610 kubelet[2634]: E0905 00:37:19.533425 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f54597f5f-zzxvr" Sep 5 00:37:19.533610 kubelet[2634]: E0905 00:37:19.533458 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f54597f5f-zzxvr_calico-system(40b66138-bf6a-409e-a1b2-a4d0a1c088cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f54597f5f-zzxvr_calico-system(40b66138-bf6a-409e-a1b2-a4d0a1c088cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f54597f5f-zzxvr" podUID="40b66138-bf6a-409e-a1b2-a4d0a1c088cb" Sep 5 00:37:19.539122 containerd[1543]: time="2025-09-05T00:37:19.539090345Z" level=error msg="Failed to destroy network for sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.539678 containerd[1543]: time="2025-09-05T00:37:19.539647743Z" level=error msg="encountered an error cleaning up failed sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.539788 containerd[1543]: time="2025-09-05T00:37:19.539767783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-5d4st,Uid:3c0cb5b9-141e-499d-8a34-a9fa818a010c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.540314 kubelet[2634]: E0905 00:37:19.540019 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.540314 kubelet[2634]: E0905 00:37:19.540085 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-5d4st" Sep 5 00:37:19.540314 kubelet[2634]: E0905 00:37:19.540101 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-5d4st" Sep 5 00:37:19.540438 kubelet[2634]: E0905 00:37:19.540142 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-5d4st_calico-system(3c0cb5b9-141e-499d-8a34-a9fa818a010c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-5d4st_calico-system(3c0cb5b9-141e-499d-8a34-a9fa818a010c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-5d4st" podUID="3c0cb5b9-141e-499d-8a34-a9fa818a010c" Sep 5 00:37:19.557357 containerd[1543]: time="2025-09-05T00:37:19.557320151Z" level=error msg="Failed to destroy network for sandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.557638 containerd[1543]: time="2025-09-05T00:37:19.557602030Z" level=error msg="encountered an error cleaning up failed sandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.557685 containerd[1543]: time="2025-09-05T00:37:19.557647989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n6pct,Uid:2cdc98cd-288c-46fa-8622-2d2669974b33,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.557849 kubelet[2634]: E0905 00:37:19.557816 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:19.558184 kubelet[2634]: E0905 00:37:19.557961 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-n6pct" Sep 5 00:37:19.558184 kubelet[2634]: E0905 00:37:19.557984 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-n6pct" Sep 5 00:37:19.558184 kubelet[2634]: E0905 00:37:19.558018 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-n6pct_kube-system(2cdc98cd-288c-46fa-8622-2d2669974b33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-n6pct_kube-system(2cdc98cd-288c-46fa-8622-2d2669974b33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-n6pct" podUID="2cdc98cd-288c-46fa-8622-2d2669974b33" Sep 5 00:37:20.082724 containerd[1543]: time="2025-09-05T00:37:20.080116906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lpmx,Uid:b629d959-9ce0-4662-8a0e-a74c6f7f28b5,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:20.150733 containerd[1543]: time="2025-09-05T00:37:20.150690549Z" level=error msg="Failed to destroy network for sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.151008 containerd[1543]: time="2025-09-05T00:37:20.150979308Z" level=error msg="encountered an error cleaning up failed sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.151062 containerd[1543]: time="2025-09-05T00:37:20.151025068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lpmx,Uid:b629d959-9ce0-4662-8a0e-a74c6f7f28b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.151296 kubelet[2634]: E0905 00:37:20.151256 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.151353 kubelet[2634]: E0905 00:37:20.151313 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8lpmx" Sep 5 00:37:20.151353 kubelet[2634]: E0905 00:37:20.151331 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8lpmx" Sep 5 00:37:20.151413 kubelet[2634]: E0905 00:37:20.151376 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8lpmx_calico-system(b629d959-9ce0-4662-8a0e-a74c6f7f28b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8lpmx_calico-system(b629d959-9ce0-4662-8a0e-a74c6f7f28b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8lpmx" podUID="b629d959-9ce0-4662-8a0e-a74c6f7f28b5" Sep 5 00:37:20.160097 kubelet[2634]: I0905 00:37:20.160072 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:20.161610 containerd[1543]: time="2025-09-05T00:37:20.160702030Z" level=info msg="StopPodSandbox for \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\"" Sep 5 00:37:20.161610 containerd[1543]: time="2025-09-05T00:37:20.160846990Z" level=info msg="Ensure that sandbox c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86 in task-service has been cleanup successfully" Sep 5 00:37:20.162382 kubelet[2634]: I0905 00:37:20.162365 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:20.162883 containerd[1543]: time="2025-09-05T00:37:20.162855582Z" level=info msg="StopPodSandbox for \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\"" Sep 5 00:37:20.164035 containerd[1543]: time="2025-09-05T00:37:20.163005821Z" level=info msg="Ensure that sandbox 4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede in task-service has been cleanup successfully" Sep 5 00:37:20.166159 kubelet[2634]: I0905 00:37:20.164761 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:20.166232 containerd[1543]: time="2025-09-05T00:37:20.165580931Z" level=info msg="StopPodSandbox for \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\"" Sep 5 00:37:20.166232 containerd[1543]: 
time="2025-09-05T00:37:20.165752570Z" level=info msg="Ensure that sandbox 4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5 in task-service has been cleanup successfully" Sep 5 00:37:20.180533 kubelet[2634]: I0905 00:37:20.180345 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:20.181874 kubelet[2634]: I0905 00:37:20.181814 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:20.184353 kubelet[2634]: I0905 00:37:20.184304 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:20.186217 kubelet[2634]: I0905 00:37:20.186015 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:20.187053 kubelet[2634]: I0905 00:37:20.186990 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:20.192623 containerd[1543]: time="2025-09-05T00:37:20.192590385Z" level=info msg="StopPodSandbox for \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\"" Sep 5 00:37:20.193107 containerd[1543]: time="2025-09-05T00:37:20.192744705Z" level=info msg="Ensure that sandbox ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9 in task-service has been cleanup successfully" Sep 5 00:37:20.193168 containerd[1543]: time="2025-09-05T00:37:20.192981824Z" level=info msg="StopPodSandbox for \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\"" Sep 5 00:37:20.193283 containerd[1543]: time="2025-09-05T00:37:20.193259423Z" level=info msg="StopPodSandbox for \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\"" Sep 5 00:37:20.193437 containerd[1543]: time="2025-09-05T00:37:20.193383342Z" level=info msg="StopPodSandbox for \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\"" Sep 5 00:37:20.193557 containerd[1543]: time="2025-09-05T00:37:20.193536661Z" level=info msg="Ensure that sandbox 2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041 in task-service has been cleanup successfully" Sep 5 00:37:20.193801 containerd[1543]: time="2025-09-05T00:37:20.193765381Z" level=info msg="Ensure that sandbox e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd in task-service has been cleanup successfully" Sep 5 00:37:20.194353 containerd[1543]: time="2025-09-05T00:37:20.193707341Z" level=info msg="StopPodSandbox for \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\"" Sep 5 00:37:20.194498 containerd[1543]: time="2025-09-05T00:37:20.194456018Z" level=info msg="Ensure that sandbox 35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a in task-service has been cleanup successfully" Sep 5 00:37:20.196237 containerd[1543]: time="2025-09-05T00:37:20.195564733Z" level=info msg="Ensure that sandbox f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2 in task-service has been cleanup successfully" Sep 5 00:37:20.208958 containerd[1543]: time="2025-09-05T00:37:20.208920761Z" level=error msg="StopPodSandbox for \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\" failed" error="failed to destroy network for sandbox 
\"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.209167 containerd[1543]: time="2025-09-05T00:37:20.208931721Z" level=error msg="StopPodSandbox for \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\" failed" error="failed to destroy network for sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.209361 kubelet[2634]: E0905 00:37:20.209329 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:20.209575 kubelet[2634]: E0905 00:37:20.209451 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86"} Sep 5 00:37:20.209575 kubelet[2634]: E0905 00:37:20.209514 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1594e321-794f-47ae-a260-d32320cc168b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:37:20.209575 kubelet[2634]: E0905 00:37:20.209535 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1594e321-794f-47ae-a260-d32320cc168b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d5cb679cb-vqrh4" podUID="1594e321-794f-47ae-a260-d32320cc168b" Sep 5 00:37:20.209947 kubelet[2634]: E0905 00:37:20.209872 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:20.210005 kubelet[2634]: E0905 00:37:20.209950 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede"} Sep 5 00:37:20.210005 kubelet[2634]: E0905 00:37:20.209981 2634 kuberuntime_manager.go:1079] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba02548f-dd9a-487d-9c64-7235820cae6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:37:20.210005 kubelet[2634]: E0905 00:37:20.209998 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba02548f-dd9a-487d-9c64-7235820cae6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qd2ch" podUID="ba02548f-dd9a-487d-9c64-7235820cae6f" Sep 5 00:37:20.231619 containerd[1543]: time="2025-09-05T00:37:20.231485713Z" level=error msg="StopPodSandbox for \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\" failed" error="failed to destroy network for sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.232122 kubelet[2634]: E0905 00:37:20.231780 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:20.232122 kubelet[2634]: E0905 00:37:20.231832 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041"} Sep 5 00:37:20.232122 kubelet[2634]: E0905 00:37:20.231864 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92604847-f3c6-4b7c-9399-1aa229b56af1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:37:20.232122 kubelet[2634]: E0905 00:37:20.231883 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92604847-f3c6-4b7c-9399-1aa229b56af1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-788bb96d6f-s5fmw" 
podUID="92604847-f3c6-4b7c-9399-1aa229b56af1" Sep 5 00:37:20.240011 containerd[1543]: time="2025-09-05T00:37:20.239965759Z" level=error msg="StopPodSandbox for \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\" failed" error="failed to destroy network for sandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.240936 kubelet[2634]: E0905 00:37:20.240251 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:20.240936 kubelet[2634]: E0905 00:37:20.240299 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2"} Sep 5 00:37:20.240936 kubelet[2634]: E0905 00:37:20.240331 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2cdc98cd-288c-46fa-8622-2d2669974b33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:37:20.240936 kubelet[2634]: E0905 00:37:20.240349 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2cdc98cd-288c-46fa-8622-2d2669974b33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-n6pct" podUID="2cdc98cd-288c-46fa-8622-2d2669974b33" Sep 5 00:37:20.244941 containerd[1543]: time="2025-09-05T00:37:20.244878060Z" level=error msg="StopPodSandbox for \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\" failed" error="failed to destroy network for sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.245102 kubelet[2634]: E0905 00:37:20.245071 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:20.245159 kubelet[2634]: E0905 
00:37:20.245109 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a"} Sep 5 00:37:20.245159 kubelet[2634]: E0905 00:37:20.245135 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8bf72a2-a8ab-4666-8ec0-44f6b53426bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:37:20.245231 kubelet[2634]: E0905 00:37:20.245154 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8bf72a2-a8ab-4666-8ec0-44f6b53426bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d5cb679cb-llcvx" podUID="b8bf72a2-a8ab-4666-8ec0-44f6b53426bd" Sep 5 00:37:20.248490 containerd[1543]: time="2025-09-05T00:37:20.248451126Z" level=error msg="StopPodSandbox for \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\" failed" error="failed to destroy network for sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.248726 kubelet[2634]: E0905 00:37:20.248635 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:20.248726 kubelet[2634]: E0905 00:37:20.248674 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5"} Sep 5 00:37:20.248726 kubelet[2634]: E0905 00:37:20.248697 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40b66138-bf6a-409e-a1b2-a4d0a1c088cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:37:20.248726 kubelet[2634]: E0905 00:37:20.248714 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40b66138-bf6a-409e-a1b2-a4d0a1c088cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f54597f5f-zzxvr" podUID="40b66138-bf6a-409e-a1b2-a4d0a1c088cb" Sep 5 00:37:20.253860 containerd[1543]: time="2025-09-05T00:37:20.253822545Z" level=error msg="StopPodSandbox for \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\" failed" error="failed to destroy network for sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.254396 kubelet[2634]: E0905 00:37:20.253980 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:20.254396 kubelet[2634]: E0905 00:37:20.254012 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd"} Sep 5 00:37:20.254396 kubelet[2634]: E0905 00:37:20.254045 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b629d959-9ce0-4662-8a0e-a74c6f7f28b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:37:20.254396 kubelet[2634]: E0905 00:37:20.254066 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b629d959-9ce0-4662-8a0e-a74c6f7f28b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8lpmx" podUID="b629d959-9ce0-4662-8a0e-a74c6f7f28b5" Sep 5 00:37:20.256009 containerd[1543]: time="2025-09-05T00:37:20.255973377Z" level=error msg="StopPodSandbox for \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\" failed" error="failed to destroy network for sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:37:20.256170 kubelet[2634]: E0905 00:37:20.256145 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:20.256211 kubelet[2634]: E0905 00:37:20.256175 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9"} Sep 5 00:37:20.256211 kubelet[2634]: E0905 00:37:20.256195 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c0cb5b9-141e-499d-8a34-a9fa818a010c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:37:20.256295 kubelet[2634]: E0905 00:37:20.256212 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c0cb5b9-141e-499d-8a34-a9fa818a010c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-5d4st" podUID="3c0cb5b9-141e-499d-8a34-a9fa818a010c" Sep 5 00:37:20.368483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9-shm.mount: Deactivated successfully. Sep 5 00:37:20.368609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a-shm.mount: Deactivated successfully. Sep 5 00:37:20.368698 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede-shm.mount: Deactivated successfully. Sep 5 00:37:20.368776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86-shm.mount: Deactivated successfully. Sep 5 00:37:29.998632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695298877.mount: Deactivated successfully. 
Sep 5 00:37:30.250665 containerd[1543]: time="2025-09-05T00:37:30.250538097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:30.251148 containerd[1543]: time="2025-09-05T00:37:30.251112655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 00:37:30.251916 containerd[1543]: time="2025-09-05T00:37:30.251842613Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:30.253947 containerd[1543]: time="2025-09-05T00:37:30.253471809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:30.254644 containerd[1543]: time="2025-09-05T00:37:30.254620446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 11.095246227s" Sep 5 00:37:30.254686 containerd[1543]: time="2025-09-05T00:37:30.254650846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 00:37:30.265088 containerd[1543]: time="2025-09-05T00:37:30.264989538Z" level=info msg="CreateContainer within sandbox \"fc33d6b234c06550a1bf65f1022258e76279ea75cf19cb4399b9970e51e3b068\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:37:30.282008 containerd[1543]: time="2025-09-05T00:37:30.281964812Z" level=info msg="CreateContainer within sandbox \"fc33d6b234c06550a1bf65f1022258e76279ea75cf19cb4399b9970e51e3b068\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b7c234b008c6d38b9428331348f24a3b5dba813895439f79190cf6364b11f951\"" Sep 5 00:37:30.284839 containerd[1543]: time="2025-09-05T00:37:30.282942449Z" level=info msg="StartContainer for \"b7c234b008c6d38b9428331348f24a3b5dba813895439f79190cf6364b11f951\"" Sep 5 00:37:30.437740 containerd[1543]: time="2025-09-05T00:37:30.437582033Z" level=info msg="StartContainer for \"b7c234b008c6d38b9428331348f24a3b5dba813895439f79190cf6364b11f951\" returns successfully" Sep 5 00:37:30.483120 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:37:30.483259 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 5 00:37:30.605095 containerd[1543]: time="2025-09-05T00:37:30.604527944Z" level=info msg="StopPodSandbox for \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\"" Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.678 [INFO][4019] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.680 [INFO][4019] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" iface="eth0" netns="/var/run/netns/cni-11fa4b05-32ae-6ff3-11ef-4c93c31d73ea" Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.681 [INFO][4019] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" iface="eth0" netns="/var/run/netns/cni-11fa4b05-32ae-6ff3-11ef-4c93c31d73ea" Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.682 [INFO][4019] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" iface="eth0" netns="/var/run/netns/cni-11fa4b05-32ae-6ff3-11ef-4c93c31d73ea" Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.682 [INFO][4019] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.682 [INFO][4019] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.754 [INFO][4030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" HandleID="k8s-pod-network.4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Workload="localhost-k8s-whisker--6f54597f5f--zzxvr-eth0" Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.754 [INFO][4030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.754 [INFO][4030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.764 [WARNING][4030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" HandleID="k8s-pod-network.4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Workload="localhost-k8s-whisker--6f54597f5f--zzxvr-eth0" Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.764 [INFO][4030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" HandleID="k8s-pod-network.4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Workload="localhost-k8s-whisker--6f54597f5f--zzxvr-eth0" Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.765 [INFO][4030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:30.769764 containerd[1543]: 2025-09-05 00:37:30.767 [INFO][4019] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:30.769764 containerd[1543]: time="2025-09-05T00:37:30.769576580Z" level=info msg="TearDown network for sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\" successfully" Sep 5 00:37:30.769764 containerd[1543]: time="2025-09-05T00:37:30.769618780Z" level=info msg="StopPodSandbox for \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\" returns successfully" Sep 5 00:37:30.904869 kubelet[2634]: I0905 00:37:30.904830 2634 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2kr2\" (UniqueName: \"kubernetes.io/projected/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-kube-api-access-p2kr2\") pod \"40b66138-bf6a-409e-a1b2-a4d0a1c088cb\" (UID: \"40b66138-bf6a-409e-a1b2-a4d0a1c088cb\") " Sep 5 00:37:30.905513 kubelet[2634]: I0905 00:37:30.904883 2634 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-whisker-ca-bundle\") pod \"40b66138-bf6a-409e-a1b2-a4d0a1c088cb\" (UID: \"40b66138-bf6a-409e-a1b2-a4d0a1c088cb\") " Sep 5 00:37:30.905513 kubelet[2634]: I0905 00:37:30.904923 2634 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-whisker-backend-key-pair\") pod \"40b66138-bf6a-409e-a1b2-a4d0a1c088cb\" (UID: \"40b66138-bf6a-409e-a1b2-a4d0a1c088cb\") " Sep 5 00:37:30.912072 kubelet[2634]: I0905 00:37:30.912035 2634 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "40b66138-bf6a-409e-a1b2-a4d0a1c088cb" (UID: "40b66138-bf6a-409e-a1b2-a4d0a1c088cb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 00:37:30.914374 kubelet[2634]: I0905 00:37:30.914335 2634 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-kube-api-access-p2kr2" (OuterVolumeSpecName: "kube-api-access-p2kr2") pod "40b66138-bf6a-409e-a1b2-a4d0a1c088cb" (UID: "40b66138-bf6a-409e-a1b2-a4d0a1c088cb"). InnerVolumeSpecName "kube-api-access-p2kr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:37:30.914488 kubelet[2634]: I0905 00:37:30.914461 2634 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "40b66138-bf6a-409e-a1b2-a4d0a1c088cb" (UID: "40b66138-bf6a-409e-a1b2-a4d0a1c088cb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 00:37:30.999654 systemd[1]: run-netns-cni\x2d11fa4b05\x2d32ae\x2d6ff3\x2d11ef\x2d4c93c31d73ea.mount: Deactivated successfully. Sep 5 00:37:30.999797 systemd[1]: var-lib-kubelet-pods-40b66138\x2dbf6a\x2d409e\x2da1b2\x2da4d0a1c088cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp2kr2.mount: Deactivated successfully. Sep 5 00:37:30.999899 systemd[1]: var-lib-kubelet-pods-40b66138\x2dbf6a\x2d409e\x2da1b2\x2da4d0a1c088cb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 5 00:37:31.005617 kubelet[2634]: I0905 00:37:31.005571 2634 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2kr2\" (UniqueName: \"kubernetes.io/projected/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-kube-api-access-p2kr2\") on node \"localhost\" DevicePath \"\"" Sep 5 00:37:31.005617 kubelet[2634]: I0905 00:37:31.005604 2634 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:37:31.005617 kubelet[2634]: I0905 00:37:31.005613 2634 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40b66138-bf6a-409e-a1b2-a4d0a1c088cb-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:37:31.254841 kubelet[2634]: I0905 00:37:31.254711 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tbthj" podStartSLOduration=1.901084657 podStartE2EDuration="21.254694817s" podCreationTimestamp="2025-09-05 00:37:10 +0000 UTC" firstStartedPulling="2025-09-05 00:37:10.901632884 +0000 UTC m=+16.933306572" lastFinishedPulling="2025-09-05 00:37:30.255243004 +0000 UTC m=+36.286916732" observedRunningTime="2025-09-05 00:37:31.248753752 +0000 UTC m=+37.280427480" watchObservedRunningTime="2025-09-05 00:37:31.254694817 +0000 UTC m=+37.286368545" Sep 5 00:37:31.408004 kubelet[2634]: I0905 00:37:31.407961 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06de525c-f5bd-4996-b512-29d9885a5469-whisker-ca-bundle\") pod \"whisker-7477dc8b74-gzlgn\" (UID: \"06de525c-f5bd-4996-b512-29d9885a5469\") " pod="calico-system/whisker-7477dc8b74-gzlgn" Sep 5 00:37:31.408124 kubelet[2634]: I0905 00:37:31.408027 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06de525c-f5bd-4996-b512-29d9885a5469-whisker-backend-key-pair\") pod \"whisker-7477dc8b74-gzlgn\" (UID: \"06de525c-f5bd-4996-b512-29d9885a5469\") " pod="calico-system/whisker-7477dc8b74-gzlgn" Sep 5 00:37:31.408124 kubelet[2634]: I0905 00:37:31.408066 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jvmh\" (UniqueName: \"kubernetes.io/projected/06de525c-f5bd-4996-b512-29d9885a5469-kube-api-access-4jvmh\") pod \"whisker-7477dc8b74-gzlgn\" (UID: \"06de525c-f5bd-4996-b512-29d9885a5469\") " pod="calico-system/whisker-7477dc8b74-gzlgn" Sep 5 00:37:31.563168 containerd[1543]: time="2025-09-05T00:37:31.563012293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7477dc8b74-gzlgn,Uid:06de525c-f5bd-4996-b512-29d9885a5469,Namespace:calico-system,Attempt:0,}" Sep 5 00:37:31.667149 systemd-networkd[1225]: cali90ba76dc84e: Link UP Sep 5 00:37:31.667339 systemd-networkd[1225]: cali90ba76dc84e: Gained carrier Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.589 [INFO][4054] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.602 [INFO][4054] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7477dc8b74--gzlgn-eth0 whisker-7477dc8b74- calico-system 06de525c-f5bd-4996-b512-29d9885a5469 895 0 2025-09-05 00:37:31 +0000 
UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7477dc8b74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7477dc8b74-gzlgn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali90ba76dc84e [] [] }} ContainerID="3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" Namespace="calico-system" Pod="whisker-7477dc8b74-gzlgn" WorkloadEndpoint="localhost-k8s-whisker--7477dc8b74--gzlgn-" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.603 [INFO][4054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" Namespace="calico-system" Pod="whisker-7477dc8b74-gzlgn" WorkloadEndpoint="localhost-k8s-whisker--7477dc8b74--gzlgn-eth0" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.624 [INFO][4068] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" HandleID="k8s-pod-network.3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" Workload="localhost-k8s-whisker--7477dc8b74--gzlgn-eth0" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.624 [INFO][4068] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" HandleID="k8s-pod-network.3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" Workload="localhost-k8s-whisker--7477dc8b74--gzlgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7477dc8b74-gzlgn", "timestamp":"2025-09-05 00:37:31.624257494 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.624 [INFO][4068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.624 [INFO][4068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
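[annotation] The pod_startup_latency_tracker record above for calico-node-tbthj reports podStartE2EDuration="21.254694817s" but podStartSLOduration=1.901084657: the SLO figure appears to be the end-to-end startup latency minus the time spent pulling images, with the pull time taken from the monotonic (m=+…) clock readings. A back-of-the-envelope check against the offsets in that record:

package main

import "fmt"

func main() {
	// Monotonic (m=+…) offsets copied from the kubelet record above.
	const (
		firstStartedPulling = 16.933306572 // m=+ at firstStartedPulling
		lastFinishedPulling = 36.286916732 // m=+ at lastFinishedPulling
		e2e                 = 21.254694817 // podStartE2EDuration, seconds
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pulling: %.9fs\n", pull)        // 19.353610160s
	fmt.Printf("e2e - pulling: %.9fs\n", e2e-pull)    // 1.901084657s, the logged SLO duration
}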
Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.624 [INFO][4068] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.633 [INFO][4068] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" host="localhost" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.638 [INFO][4068] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.646 [INFO][4068] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.648 [INFO][4068] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.650 [INFO][4068] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.650 [INFO][4068] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" host="localhost" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.651 [INFO][4068] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.655 [INFO][4068] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" host="localhost" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.659 [INFO][4068] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" host="localhost" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.659 [INFO][4068] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" host="localhost" Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.659 [INFO][4068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
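[annotation] The IPAM walk above (acquire the host-wide lock, confirm the node's block affinity for 192.168.88.128/26, then claim the first free address, here 192.168.88.129) is first-fit allocation within an affine block. Calico's real allocator is datastore-backed and handle-aware; this is only a toy sketch of the first-fit step, with the in-memory `used` map and the reserved network address as assumptions:

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the block in address order and returns the first
// address not already recorded as used.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{
		block.Addr(): true, // treat the block's first address as reserved
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("claimed", a) // claimed 192.168.88.129
	}
}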
Sep 5 00:37:31.679945 containerd[1543]: 2025-09-05 00:37:31.659 [INFO][4068] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" HandleID="k8s-pod-network.3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" Workload="localhost-k8s-whisker--7477dc8b74--gzlgn-eth0" Sep 5 00:37:31.680520 containerd[1543]: 2025-09-05 00:37:31.661 [INFO][4054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" Namespace="calico-system" Pod="whisker-7477dc8b74-gzlgn" WorkloadEndpoint="localhost-k8s-whisker--7477dc8b74--gzlgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7477dc8b74--gzlgn-eth0", GenerateName:"whisker-7477dc8b74-", Namespace:"calico-system", SelfLink:"", UID:"06de525c-f5bd-4996-b512-29d9885a5469", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7477dc8b74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7477dc8b74-gzlgn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali90ba76dc84e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:31.680520 containerd[1543]: 2025-09-05 00:37:31.661 [INFO][4054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" Namespace="calico-system" Pod="whisker-7477dc8b74-gzlgn" WorkloadEndpoint="localhost-k8s-whisker--7477dc8b74--gzlgn-eth0" Sep 5 00:37:31.680520 containerd[1543]: 2025-09-05 00:37:31.661 [INFO][4054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90ba76dc84e ContainerID="3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" Namespace="calico-system" Pod="whisker-7477dc8b74-gzlgn" WorkloadEndpoint="localhost-k8s-whisker--7477dc8b74--gzlgn-eth0" Sep 5 00:37:31.680520 containerd[1543]: 2025-09-05 00:37:31.668 [INFO][4054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" Namespace="calico-system" Pod="whisker-7477dc8b74-gzlgn" WorkloadEndpoint="localhost-k8s-whisker--7477dc8b74--gzlgn-eth0" Sep 5 00:37:31.680520 containerd[1543]: 2025-09-05 00:37:31.670 [INFO][4054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" Namespace="calico-system" Pod="whisker-7477dc8b74-gzlgn" WorkloadEndpoint="localhost-k8s-whisker--7477dc8b74--gzlgn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7477dc8b74--gzlgn-eth0", GenerateName:"whisker-7477dc8b74-", Namespace:"calico-system", SelfLink:"", UID:"06de525c-f5bd-4996-b512-29d9885a5469", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7477dc8b74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d", Pod:"whisker-7477dc8b74-gzlgn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali90ba76dc84e", MAC:"0e:f8:e2:66:86:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:31.680520 containerd[1543]: 2025-09-05 00:37:31.677 [INFO][4054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d" Namespace="calico-system" Pod="whisker-7477dc8b74-gzlgn" WorkloadEndpoint="localhost-k8s-whisker--7477dc8b74--gzlgn-eth0" Sep 5 00:37:31.696155 containerd[1543]: time="2025-09-05T00:37:31.695924627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:37:31.696155 containerd[1543]: time="2025-09-05T00:37:31.695972587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:37:31.696155 containerd[1543]: time="2025-09-05T00:37:31.695988027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:31.696155 containerd[1543]: time="2025-09-05T00:37:31.696065307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:31.727938 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:37:31.743201 containerd[1543]: time="2025-09-05T00:37:31.743161704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7477dc8b74-gzlgn,Uid:06de525c-f5bd-4996-b512-29d9885a5469,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d\"" Sep 5 00:37:31.744473 containerd[1543]: time="2025-09-05T00:37:31.744293141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 00:37:32.073072 containerd[1543]: time="2025-09-05T00:37:32.073024290Z" level=info msg="StopPodSandbox for \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\"" Sep 5 00:37:32.073725 containerd[1543]: time="2025-09-05T00:37:32.073315850Z" level=info msg="StopPodSandbox for \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\"" Sep 5 00:37:32.074893 kubelet[2634]: I0905 00:37:32.074739 2634 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b66138-bf6a-409e-a1b2-a4d0a1c088cb" path="/var/lib/kubelet/pods/40b66138-bf6a-409e-a1b2-a4d0a1c088cb/volumes" Sep 5 00:37:32.078392 containerd[1543]: time="2025-09-05T00:37:32.078146997Z" level=info msg="StopPodSandbox for \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\"" Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.123 [INFO][4268] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.123 [INFO][4268] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" iface="eth0" netns="/var/run/netns/cni-7d9b9df7-403a-1fad-0288-b608fcee11ff" Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.124 [INFO][4268] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" iface="eth0" netns="/var/run/netns/cni-7d9b9df7-403a-1fad-0288-b608fcee11ff" Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.124 [INFO][4268] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" iface="eth0" netns="/var/run/netns/cni-7d9b9df7-403a-1fad-0288-b608fcee11ff" Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.124 [INFO][4268] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.124 [INFO][4268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.148 [INFO][4284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" HandleID="k8s-pod-network.2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.148 [INFO][4284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
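[annotation] Each sandbox teardown in this log ends with the IPAM plugin warning "Asked to release address but it doesn't exist. Ignoring" (first in the whisker teardown above, then again in the three teardowns that follow). That is deliberate: CNI DEL must be idempotent, since the runtime can invoke it more than once for the same sandbox, so a missing allocation is logged and swallowed rather than surfaced as an error. A minimal sketch of the pattern, with the in-memory allocation map as an assumption:

package main

import "log"

// releaseByHandle frees whatever is recorded for handleID; an absent
// entry is not an error, mirroring the WARNING lines in this log.
func releaseByHandle(alloc map[string]string, handleID string) {
	if _, ok := alloc[handleID]; !ok {
		log.Printf("Asked to release address but it doesn't exist. Ignoring HandleID=%q", handleID)
		return
	}
	delete(alloc, handleID)
	log.Printf("released HandleID=%q", handleID)
}

func main() {
	alloc := map[string]string{}
	// A second DEL for the same sandbox hits the ignore path.
	releaseByHandle(alloc, "k8s-pod-network.4302a3aade8d")
	releaseByHandle(alloc, "k8s-pod-network.4302a3aade8d")
}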
Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.148 [INFO][4284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.158 [WARNING][4284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" HandleID="k8s-pod-network.2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.158 [INFO][4284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" HandleID="k8s-pod-network.2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.163 [INFO][4284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:32.170766 containerd[1543]: 2025-09-05 00:37:32.167 [INFO][4268] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:32.171865 containerd[1543]: time="2025-09-05T00:37:32.171256242Z" level=info msg="TearDown network for sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\" successfully" Sep 5 00:37:32.171865 containerd[1543]: time="2025-09-05T00:37:32.171287842Z" level=info msg="StopPodSandbox for \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\" returns successfully" Sep 5 00:37:32.172437 containerd[1543]: time="2025-09-05T00:37:32.172408439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788bb96d6f-s5fmw,Uid:92604847-f3c6-4b7c-9399-1aa229b56af1,Namespace:calico-system,Attempt:1,}" Sep 5 00:37:32.173917 systemd[1]: run-netns-cni\x2d7d9b9df7\x2d403a\x2d1fad\x2d0288\x2db608fcee11ff.mount: Deactivated successfully. Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.130 [INFO][4242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.130 [INFO][4242] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" iface="eth0" netns="/var/run/netns/cni-5f9123d4-a6a3-c5ed-24f1-b7ed2de308b2" Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.131 [INFO][4242] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" iface="eth0" netns="/var/run/netns/cni-5f9123d4-a6a3-c5ed-24f1-b7ed2de308b2" Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.131 [INFO][4242] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" iface="eth0" netns="/var/run/netns/cni-5f9123d4-a6a3-c5ed-24f1-b7ed2de308b2" Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.131 [INFO][4242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.131 [INFO][4242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.152 [INFO][4290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" HandleID="k8s-pod-network.35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.152 [INFO][4290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.163 [INFO][4290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.175 [WARNING][4290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" HandleID="k8s-pod-network.35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.175 [INFO][4290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" HandleID="k8s-pod-network.35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.177 [INFO][4290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:32.185004 containerd[1543]: 2025-09-05 00:37:32.179 [INFO][4242] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:32.185004 containerd[1543]: time="2025-09-05T00:37:32.184439289Z" level=info msg="TearDown network for sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\" successfully" Sep 5 00:37:32.185004 containerd[1543]: time="2025-09-05T00:37:32.184460689Z" level=info msg="StopPodSandbox for \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\" returns successfully" Sep 5 00:37:32.186110 containerd[1543]: time="2025-09-05T00:37:32.186079285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5cb679cb-llcvx,Uid:b8bf72a2-a8ab-4666-8ec0-44f6b53426bd,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:37:32.187187 systemd[1]: run-netns-cni\x2d5f9123d4\x2da6a3\x2dc5ed\x2d24f1\x2db7ed2de308b2.mount: Deactivated successfully. Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.138 [INFO][4258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.138 [INFO][4258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" iface="eth0" netns="/var/run/netns/cni-ce4cd962-430f-8edd-ba94-6ce295ba29ad" Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.138 [INFO][4258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" iface="eth0" netns="/var/run/netns/cni-ce4cd962-430f-8edd-ba94-6ce295ba29ad" Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.138 [INFO][4258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" iface="eth0" netns="/var/run/netns/cni-ce4cd962-430f-8edd-ba94-6ce295ba29ad" Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.139 [INFO][4258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.139 [INFO][4258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.164 [INFO][4298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" HandleID="k8s-pod-network.4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.165 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.177 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.188 [WARNING][4298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" HandleID="k8s-pod-network.4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.188 [INFO][4298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" HandleID="k8s-pod-network.4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.190 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:32.193521 containerd[1543]: 2025-09-05 00:37:32.191 [INFO][4258] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:32.193919 containerd[1543]: time="2025-09-05T00:37:32.193764745Z" level=info msg="TearDown network for sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\" successfully" Sep 5 00:37:32.193919 containerd[1543]: time="2025-09-05T00:37:32.193786825Z" level=info msg="StopPodSandbox for \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\" returns successfully" Sep 5 00:37:32.194657 kubelet[2634]: E0905 00:37:32.194179 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:32.195485 containerd[1543]: time="2025-09-05T00:37:32.195405461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qd2ch,Uid:ba02548f-dd9a-487d-9c64-7235820cae6f,Namespace:kube-system,Attempt:1,}" Sep 5 00:37:32.196140 systemd[1]: run-netns-cni\x2dce4cd962\x2d430f\x2d8edd\x2dba94\x2d6ce295ba29ad.mount: Deactivated successfully. Sep 5 00:37:32.222935 kubelet[2634]: I0905 00:37:32.221638 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:37:32.333372 systemd-networkd[1225]: cali82e19447ccd: Link UP Sep 5 00:37:32.334963 systemd-networkd[1225]: cali82e19447ccd: Gained carrier Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.245 [INFO][4322] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.260 [INFO][4322] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0 calico-apiserver-7d5cb679cb- calico-apiserver b8bf72a2-a8ab-4666-8ec0-44f6b53426bd 906 0 2025-09-05 00:37:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d5cb679cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d5cb679cb-llcvx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali82e19447ccd [] [] }} ContainerID="0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-llcvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.260 [INFO][4322] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-llcvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.289 [INFO][4359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" HandleID="k8s-pod-network.0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.289 [INFO][4359] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" 
HandleID="k8s-pod-network.0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004921f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d5cb679cb-llcvx", "timestamp":"2025-09-05 00:37:32.289330424 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.289 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.289 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.289 [INFO][4359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.300 [INFO][4359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" host="localhost" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.304 [INFO][4359] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.312 [INFO][4359] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.314 [INFO][4359] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.317 [INFO][4359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.317 [INFO][4359] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" host="localhost" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.319 [INFO][4359] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1 Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.323 [INFO][4359] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" host="localhost" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.329 [INFO][4359] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" host="localhost" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.329 [INFO][4359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" host="localhost" Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.329 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:37:32.355199 containerd[1543]: 2025-09-05 00:37:32.329 [INFO][4359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" HandleID="k8s-pod-network.0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:32.355767 containerd[1543]: 2025-09-05 00:37:32.331 [INFO][4322] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-llcvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0", GenerateName:"calico-apiserver-7d5cb679cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8bf72a2-a8ab-4666-8ec0-44f6b53426bd", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5cb679cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d5cb679cb-llcvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82e19447ccd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:32.355767 containerd[1543]: 2025-09-05 00:37:32.331 [INFO][4322] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-llcvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:32.355767 containerd[1543]: 2025-09-05 00:37:32.331 [INFO][4322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82e19447ccd ContainerID="0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-llcvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:32.355767 containerd[1543]: 2025-09-05 00:37:32.334 [INFO][4322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-llcvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:32.355767 containerd[1543]: 2025-09-05 00:37:32.334 [INFO][4322] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-llcvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0", GenerateName:"calico-apiserver-7d5cb679cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8bf72a2-a8ab-4666-8ec0-44f6b53426bd", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5cb679cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1", Pod:"calico-apiserver-7d5cb679cb-llcvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82e19447ccd", MAC:"9a:58:5d:5d:46:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:32.355767 containerd[1543]: 2025-09-05 00:37:32.353 [INFO][4322] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-llcvx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:32.372974 containerd[1543]: time="2025-09-05T00:37:32.370987658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:37:32.372974 containerd[1543]: time="2025-09-05T00:37:32.371050578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:37:32.372974 containerd[1543]: time="2025-09-05T00:37:32.371362817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:32.372974 containerd[1543]: time="2025-09-05T00:37:32.371754536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:32.413931 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:37:32.442453 containerd[1543]: time="2025-09-05T00:37:32.442205198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5cb679cb-llcvx,Uid:b8bf72a2-a8ab-4666-8ec0-44f6b53426bd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1\"" Sep 5 00:37:32.443374 systemd-networkd[1225]: calia038c45af76: Link UP Sep 5 00:37:32.444490 systemd-networkd[1225]: calia038c45af76: Gained carrier Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.231 [INFO][4310] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.246 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0 calico-kube-controllers-788bb96d6f- calico-system 92604847-f3c6-4b7c-9399-1aa229b56af1 905 0 2025-09-05 00:37:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:788bb96d6f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-788bb96d6f-s5fmw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia038c45af76 [] [] }} ContainerID="55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" Namespace="calico-system" Pod="calico-kube-controllers-788bb96d6f-s5fmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.246 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" Namespace="calico-system" Pod="calico-kube-controllers-788bb96d6f-s5fmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.291 [INFO][4353] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" HandleID="k8s-pod-network.55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.292 [INFO][4353] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" HandleID="k8s-pod-network.55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000343070), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-788bb96d6f-s5fmw", "timestamp":"2025-09-05 00:37:32.290699701 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.292 
[INFO][4353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.329 [INFO][4353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.329 [INFO][4353] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.401 [INFO][4353] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" host="localhost" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.408 [INFO][4353] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.412 [INFO][4353] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.414 [INFO][4353] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.417 [INFO][4353] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.417 [INFO][4353] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" host="localhost" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.419 [INFO][4353] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966 Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.423 [INFO][4353] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" host="localhost" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.429 [INFO][4353] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" host="localhost" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.429 [INFO][4353] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" host="localhost" Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.429 [INFO][4353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
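[annotation] The endpoint dumps in this log each carry a freshly generated MAC (0e:f8:e2:66:86:18 and 9a:58:5d:5d:46:d6 above; fa:07:b3:3d:a8:b6 follows below). All three fit the standard recipe for virtual interfaces: six random bytes with the locally administered bit (0x02) set and the group/multicast bit (0x01) cleared in the first octet. Whether Calico derives its MACs exactly this way is an assumption; the logged values are consistent with it. A sketch of the usual recipe:

package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomMAC returns a random unicast, locally administered MAC: the
// first octet gets the local bit (0x02) set and the group bit (0x01)
// cleared, like the cali* endpoint MACs in the log.
func randomMAC() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] = mac[0]&^0x01 | 0x02
	return mac, nil
}

func main() {
	mac, err := randomMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac) // e.g. 0e:f8:e2:66:86:18
}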
Sep 5 00:37:32.456823 containerd[1543]: 2025-09-05 00:37:32.429 [INFO][4353] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" HandleID="k8s-pod-network.55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:32.457482 containerd[1543]: 2025-09-05 00:37:32.433 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" Namespace="calico-system" Pod="calico-kube-controllers-788bb96d6f-s5fmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0", GenerateName:"calico-kube-controllers-788bb96d6f-", Namespace:"calico-system", SelfLink:"", UID:"92604847-f3c6-4b7c-9399-1aa229b56af1", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"788bb96d6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-788bb96d6f-s5fmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia038c45af76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:32.457482 containerd[1543]: 2025-09-05 00:37:32.434 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" Namespace="calico-system" Pod="calico-kube-controllers-788bb96d6f-s5fmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:32.457482 containerd[1543]: 2025-09-05 00:37:32.434 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia038c45af76 ContainerID="55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" Namespace="calico-system" Pod="calico-kube-controllers-788bb96d6f-s5fmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:32.457482 containerd[1543]: 2025-09-05 00:37:32.445 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" Namespace="calico-system" Pod="calico-kube-controllers-788bb96d6f-s5fmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:32.457482 containerd[1543]: 2025-09-05 00:37:32.445 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" Namespace="calico-system" Pod="calico-kube-controllers-788bb96d6f-s5fmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0", GenerateName:"calico-kube-controllers-788bb96d6f-", Namespace:"calico-system", SelfLink:"", UID:"92604847-f3c6-4b7c-9399-1aa229b56af1", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"788bb96d6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966", Pod:"calico-kube-controllers-788bb96d6f-s5fmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia038c45af76", MAC:"fa:07:b3:3d:a8:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:32.457482 containerd[1543]: 2025-09-05 00:37:32.453 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966" Namespace="calico-system" Pod="calico-kube-controllers-788bb96d6f-s5fmw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:32.475185 containerd[1543]: time="2025-09-05T00:37:32.474547796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:37:32.475185 containerd[1543]: time="2025-09-05T00:37:32.474990475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:37:32.475185 containerd[1543]: time="2025-09-05T00:37:32.475010795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:32.475185 containerd[1543]: time="2025-09-05T00:37:32.475097595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:32.494814 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:37:32.518024 containerd[1543]: time="2025-09-05T00:37:32.517979446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788bb96d6f-s5fmw,Uid:92604847-f3c6-4b7c-9399-1aa229b56af1,Namespace:calico-system,Attempt:1,} returns sandbox id \"55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966\"" Sep 5 00:37:32.531709 systemd-networkd[1225]: cali52f51ed53b7: Link UP Sep 5 00:37:32.532217 systemd-networkd[1225]: cali52f51ed53b7: Gained carrier Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.248 [INFO][4326] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.265 [INFO][4326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0 coredns-7c65d6cfc9- kube-system ba02548f-dd9a-487d-9c64-7235820cae6f 907 0 2025-09-05 00:36:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-qd2ch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali52f51ed53b7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd2ch" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd2ch-" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.265 [INFO][4326] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd2ch" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.296 [INFO][4365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" HandleID="k8s-pod-network.03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.296 [INFO][4365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" HandleID="k8s-pod-network.03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000338660), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-qd2ch", "timestamp":"2025-09-05 00:37:32.296065967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.296 [INFO][4365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.429 [INFO][4365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.433 [INFO][4365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.502 [INFO][4365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" host="localhost" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.509 [INFO][4365] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.513 [INFO][4365] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.514 [INFO][4365] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.517 [INFO][4365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.517 [INFO][4365] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" host="localhost" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.518 [INFO][4365] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5 Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.522 [INFO][4365] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" host="localhost" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.528 [INFO][4365] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" host="localhost" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.528 [INFO][4365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" host="localhost" Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.528 [INFO][4365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:37:32.543700 containerd[1543]: 2025-09-05 00:37:32.528 [INFO][4365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" HandleID="k8s-pod-network.03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:32.544383 containerd[1543]: 2025-09-05 00:37:32.530 [INFO][4326] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd2ch" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ba02548f-dd9a-487d-9c64-7235820cae6f", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-qd2ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52f51ed53b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:32.544383 containerd[1543]: 2025-09-05 00:37:32.530 [INFO][4326] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd2ch" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:32.544383 containerd[1543]: 2025-09-05 00:37:32.530 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52f51ed53b7 ContainerID="03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd2ch" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:32.544383 containerd[1543]: 2025-09-05 00:37:32.532 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd2ch" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:32.544383 
containerd[1543]: 2025-09-05 00:37:32.532 [INFO][4326] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd2ch" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ba02548f-dd9a-487d-9c64-7235820cae6f", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5", Pod:"coredns-7c65d6cfc9-qd2ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52f51ed53b7", MAC:"aa:a0:43:db:b2:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:32.544383 containerd[1543]: 2025-09-05 00:37:32.541 [INFO][4326] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd2ch" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:32.558866 containerd[1543]: time="2025-09-05T00:37:32.558543984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:37:32.558866 containerd[1543]: time="2025-09-05T00:37:32.558828223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:37:32.558866 containerd[1543]: time="2025-09-05T00:37:32.558848263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:32.559101 containerd[1543]: time="2025-09-05T00:37:32.558993463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:32.585495 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:37:32.604398 containerd[1543]: time="2025-09-05T00:37:32.604349628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qd2ch,Uid:ba02548f-dd9a-487d-9c64-7235820cae6f,Namespace:kube-system,Attempt:1,} returns sandbox id \"03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5\"" Sep 5 00:37:32.605773 kubelet[2634]: E0905 00:37:32.605422 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:32.608635 containerd[1543]: time="2025-09-05T00:37:32.608358778Z" level=info msg="CreateContainer within sandbox \"03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:37:32.628508 containerd[1543]: time="2025-09-05T00:37:32.628465487Z" level=info msg="CreateContainer within sandbox \"03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb2ce1973ceecec3eab2a4ba3f0324d028e2bcf18134e2d6b282c23013873f64\"" Sep 5 00:37:32.629425 containerd[1543]: time="2025-09-05T00:37:32.629394165Z" level=info msg="StartContainer for \"bb2ce1973ceecec3eab2a4ba3f0324d028e2bcf18134e2d6b282c23013873f64\"" Sep 5 00:37:32.676437 containerd[1543]: time="2025-09-05T00:37:32.676321526Z" level=info msg="StartContainer for \"bb2ce1973ceecec3eab2a4ba3f0324d028e2bcf18134e2d6b282c23013873f64\" returns successfully" Sep 5 00:37:33.140769 containerd[1543]: time="2025-09-05T00:37:33.140724484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:33.141170 containerd[1543]: time="2025-09-05T00:37:33.141135283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 00:37:33.142190 containerd[1543]: time="2025-09-05T00:37:33.142164720Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:33.145075 containerd[1543]: time="2025-09-05T00:37:33.145026073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:33.146044 containerd[1543]: time="2025-09-05T00:37:33.145775551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.40144805s" Sep 5 00:37:33.146044 containerd[1543]: time="2025-09-05T00:37:33.145806991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 00:37:33.147325 containerd[1543]: time="2025-09-05T00:37:33.147301508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:37:33.148242 
containerd[1543]: time="2025-09-05T00:37:33.148211625Z" level=info msg="CreateContainer within sandbox \"3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:37:33.157749 containerd[1543]: time="2025-09-05T00:37:33.157702802Z" level=info msg="CreateContainer within sandbox \"3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4ccbada86702a6216e3de915dba926cce9159a09b13cb00d35e4780c82907001\"" Sep 5 00:37:33.158293 containerd[1543]: time="2025-09-05T00:37:33.158166761Z" level=info msg="StartContainer for \"4ccbada86702a6216e3de915dba926cce9159a09b13cb00d35e4780c82907001\"" Sep 5 00:37:33.204933 containerd[1543]: time="2025-09-05T00:37:33.204874566Z" level=info msg="StartContainer for \"4ccbada86702a6216e3de915dba926cce9159a09b13cb00d35e4780c82907001\" returns successfully" Sep 5 00:37:33.228230 kubelet[2634]: E0905 00:37:33.227873 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:33.237479 kubelet[2634]: I0905 00:37:33.237251 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qd2ch" podStartSLOduration=34.237234727 podStartE2EDuration="34.237234727s" podCreationTimestamp="2025-09-05 00:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:37:33.236605489 +0000 UTC m=+39.268279217" watchObservedRunningTime="2025-09-05 00:37:33.237234727 +0000 UTC m=+39.268908415" Sep 5 00:37:33.464079 systemd-networkd[1225]: cali90ba76dc84e: Gained IPv6LL Sep 5 00:37:33.592041 systemd-networkd[1225]: cali52f51ed53b7: Gained IPv6LL Sep 5 00:37:34.072927 containerd[1543]: time="2025-09-05T00:37:34.072862163Z" level=info msg="StopPodSandbox for \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\"" Sep 5 00:37:34.073276 containerd[1543]: time="2025-09-05T00:37:34.073240602Z" level=info msg="StopPodSandbox for \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\"" Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.122 [INFO][4678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.122 [INFO][4678] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" iface="eth0" netns="/var/run/netns/cni-61eeb490-5975-a6b4-55de-2c3e3dacfee1" Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.123 [INFO][4678] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" iface="eth0" netns="/var/run/netns/cni-61eeb490-5975-a6b4-55de-2c3e3dacfee1" Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.123 [INFO][4678] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" iface="eth0" netns="/var/run/netns/cni-61eeb490-5975-a6b4-55de-2c3e3dacfee1" Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.123 [INFO][4678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.123 [INFO][4678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.140 [INFO][4695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" HandleID="k8s-pod-network.c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.140 [INFO][4695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.140 [INFO][4695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.149 [WARNING][4695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" HandleID="k8s-pod-network.c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.149 [INFO][4695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" HandleID="k8s-pod-network.c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.150 [INFO][4695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:34.154305 containerd[1543]: 2025-09-05 00:37:34.152 [INFO][4678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:34.154692 containerd[1543]: time="2025-09-05T00:37:34.154434129Z" level=info msg="TearDown network for sandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\" successfully" Sep 5 00:37:34.154692 containerd[1543]: time="2025-09-05T00:37:34.154462609Z" level=info msg="StopPodSandbox for \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\" returns successfully" Sep 5 00:37:34.156612 systemd[1]: run-netns-cni\x2d61eeb490\x2d5975\x2da6b4\x2d55de\x2d2c3e3dacfee1.mount: Deactivated successfully. Sep 5 00:37:34.157472 containerd[1543]: time="2025-09-05T00:37:34.157090243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5cb679cb-vqrh4,Uid:1594e321-794f-47ae-a260-d32320cc168b,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.120 [INFO][4679] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.120 [INFO][4679] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" iface="eth0" netns="/var/run/netns/cni-38f1150d-601c-1c0d-8c69-d690f9d66bcf" Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.121 [INFO][4679] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" iface="eth0" netns="/var/run/netns/cni-38f1150d-601c-1c0d-8c69-d690f9d66bcf" Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.121 [INFO][4679] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" iface="eth0" netns="/var/run/netns/cni-38f1150d-601c-1c0d-8c69-d690f9d66bcf" Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.121 [INFO][4679] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.121 [INFO][4679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.139 [INFO][4693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" HandleID="k8s-pod-network.e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.140 [INFO][4693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.150 [INFO][4693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.159 [WARNING][4693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" HandleID="k8s-pod-network.e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.159 [INFO][4693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" HandleID="k8s-pod-network.e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.161 [INFO][4693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:34.165327 containerd[1543]: 2025-09-05 00:37:34.162 [INFO][4679] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:34.166385 containerd[1543]: time="2025-09-05T00:37:34.166252981Z" level=info msg="TearDown network for sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\" successfully" Sep 5 00:37:34.166385 containerd[1543]: time="2025-09-05T00:37:34.166303301Z" level=info msg="StopPodSandbox for \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\" returns successfully" Sep 5 00:37:34.166802 containerd[1543]: time="2025-09-05T00:37:34.166778300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lpmx,Uid:b629d959-9ce0-4662-8a0e-a74c6f7f28b5,Namespace:calico-system,Attempt:1,}" Sep 5 00:37:34.167268 systemd[1]: run-netns-cni\x2d38f1150d\x2d601c\x2d1c0d\x2d8c69\x2dd690f9d66bcf.mount: Deactivated successfully. Sep 5 00:37:34.232044 systemd-networkd[1225]: cali82e19447ccd: Gained IPv6LL Sep 5 00:37:34.234292 kubelet[2634]: E0905 00:37:34.234266 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:34.275775 systemd-networkd[1225]: cali009ff3f38e0: Link UP Sep 5 00:37:34.276318 systemd-networkd[1225]: cali009ff3f38e0: Gained carrier Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.196 [INFO][4709] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.210 [INFO][4709] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0 calico-apiserver-7d5cb679cb- calico-apiserver 1594e321-794f-47ae-a260-d32320cc168b 945 0 2025-09-05 00:37:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d5cb679cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d5cb679cb-vqrh4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali009ff3f38e0 [] [] }} ContainerID="dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-vqrh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.210 [INFO][4709] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-vqrh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.234 [INFO][4738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" HandleID="k8s-pod-network.dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.234 [INFO][4738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" HandleID="k8s-pod-network.dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" 
Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d5cb679cb-vqrh4", "timestamp":"2025-09-05 00:37:34.234772538 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.235 [INFO][4738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.235 [INFO][4738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.235 [INFO][4738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.244 [INFO][4738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" host="localhost" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.247 [INFO][4738] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.251 [INFO][4738] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.253 [INFO][4738] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.255 [INFO][4738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.255 [INFO][4738] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" host="localhost" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.256 [INFO][4738] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.260 [INFO][4738] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" host="localhost" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.266 [INFO][4738] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" host="localhost" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.266 [INFO][4738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" host="localhost" Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.266 [INFO][4738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:37:34.286419 containerd[1543]: 2025-09-05 00:37:34.266 [INFO][4738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" HandleID="k8s-pod-network.dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:34.286993 containerd[1543]: 2025-09-05 00:37:34.273 [INFO][4709] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-vqrh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0", GenerateName:"calico-apiserver-7d5cb679cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1594e321-794f-47ae-a260-d32320cc168b", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5cb679cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d5cb679cb-vqrh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali009ff3f38e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:34.286993 containerd[1543]: 2025-09-05 00:37:34.273 [INFO][4709] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-vqrh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:34.286993 containerd[1543]: 2025-09-05 00:37:34.273 [INFO][4709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali009ff3f38e0 ContainerID="dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-vqrh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:34.286993 containerd[1543]: 2025-09-05 00:37:34.276 [INFO][4709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-vqrh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:34.286993 containerd[1543]: 2025-09-05 00:37:34.276 [INFO][4709] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-vqrh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0", GenerateName:"calico-apiserver-7d5cb679cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1594e321-794f-47ae-a260-d32320cc168b", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5cb679cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad", Pod:"calico-apiserver-7d5cb679cb-vqrh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali009ff3f38e0", MAC:"72:ab:9d:92:b1:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:34.286993 containerd[1543]: 2025-09-05 00:37:34.284 [INFO][4709] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad" Namespace="calico-apiserver" Pod="calico-apiserver-7d5cb679cb-vqrh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:34.300737 containerd[1543]: time="2025-09-05T00:37:34.300539821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:37:34.300737 containerd[1543]: time="2025-09-05T00:37:34.300582061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:37:34.300737 containerd[1543]: time="2025-09-05T00:37:34.300592501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:34.300737 containerd[1543]: time="2025-09-05T00:37:34.300661821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:34.324594 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:37:34.343055 containerd[1543]: time="2025-09-05T00:37:34.343020480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5cb679cb-vqrh4,Uid:1594e321-794f-47ae-a260-d32320cc168b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad\"" Sep 5 00:37:34.369748 systemd-networkd[1225]: cali94694352815: Link UP Sep 5 00:37:34.370077 systemd-networkd[1225]: cali94694352815: Gained carrier Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.198 [INFO][4721] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.214 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8lpmx-eth0 csi-node-driver- calico-system b629d959-9ce0-4662-8a0e-a74c6f7f28b5 944 0 2025-09-05 00:37:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8lpmx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali94694352815 [] [] }} ContainerID="802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" Namespace="calico-system" Pod="csi-node-driver-8lpmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lpmx-" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.214 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" Namespace="calico-system" Pod="csi-node-driver-8lpmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.240 [INFO][4741] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" HandleID="k8s-pod-network.802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.240 [INFO][4741] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" HandleID="k8s-pod-network.802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8lpmx", "timestamp":"2025-09-05 00:37:34.240045325 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.240 [INFO][4741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.266 [INFO][4741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.266 [INFO][4741] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.345 [INFO][4741] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" host="localhost" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.350 [INFO][4741] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.353 [INFO][4741] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.355 [INFO][4741] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.357 [INFO][4741] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.357 [INFO][4741] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" host="localhost" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.358 [INFO][4741] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881 Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.361 [INFO][4741] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" host="localhost" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.366 [INFO][4741] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" host="localhost" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.366 [INFO][4741] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" host="localhost" Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.366 [INFO][4741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:37:34.382249 containerd[1543]: 2025-09-05 00:37:34.366 [INFO][4741] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" HandleID="k8s-pod-network.802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:34.382744 containerd[1543]: 2025-09-05 00:37:34.367 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" Namespace="calico-system" Pod="csi-node-driver-8lpmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lpmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8lpmx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b629d959-9ce0-4662-8a0e-a74c6f7f28b5", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8lpmx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94694352815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:34.382744 containerd[1543]: 2025-09-05 00:37:34.368 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" Namespace="calico-system" Pod="csi-node-driver-8lpmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:34.382744 containerd[1543]: 2025-09-05 00:37:34.368 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94694352815 ContainerID="802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" Namespace="calico-system" Pod="csi-node-driver-8lpmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:34.382744 containerd[1543]: 2025-09-05 00:37:34.370 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" Namespace="calico-system" Pod="csi-node-driver-8lpmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:34.382744 containerd[1543]: 2025-09-05 00:37:34.371 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" Namespace="calico-system" Pod="csi-node-driver-8lpmx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8lpmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8lpmx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b629d959-9ce0-4662-8a0e-a74c6f7f28b5", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881", Pod:"csi-node-driver-8lpmx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94694352815", MAC:"d2:68:97:75:33:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:34.382744 containerd[1543]: 2025-09-05 00:37:34.380 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881" Namespace="calico-system" Pod="csi-node-driver-8lpmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:34.395870 containerd[1543]: time="2025-09-05T00:37:34.395336315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:37:34.395870 containerd[1543]: time="2025-09-05T00:37:34.395707914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:37:34.395870 containerd[1543]: time="2025-09-05T00:37:34.395720834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:34.395870 containerd[1543]: time="2025-09-05T00:37:34.395797794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:34.417752 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:37:34.424172 systemd-networkd[1225]: calia038c45af76: Gained IPv6LL Sep 5 00:37:34.450813 containerd[1543]: time="2025-09-05T00:37:34.450726543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lpmx,Uid:b629d959-9ce0-4662-8a0e-a74c6f7f28b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881\"" Sep 5 00:37:35.044527 kubelet[2634]: I0905 00:37:35.044312 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:37:35.044691 kubelet[2634]: E0905 00:37:35.044644 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:35.073058 containerd[1543]: time="2025-09-05T00:37:35.072645827Z" level=info msg="StopPodSandbox for \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\"" Sep 5 00:37:35.073989 containerd[1543]: time="2025-09-05T00:37:35.073944344Z" level=info msg="StopPodSandbox for \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\"" Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.140 [INFO][4895] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.140 [INFO][4895] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" iface="eth0" netns="/var/run/netns/cni-6d657c1b-453a-8919-54ae-9dd1a8dc206c" Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.140 [INFO][4895] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" iface="eth0" netns="/var/run/netns/cni-6d657c1b-453a-8919-54ae-9dd1a8dc206c" Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.141 [INFO][4895] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" iface="eth0" netns="/var/run/netns/cni-6d657c1b-453a-8919-54ae-9dd1a8dc206c" Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.141 [INFO][4895] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.141 [INFO][4895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.174 [INFO][4916] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" HandleID="k8s-pod-network.f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.174 [INFO][4916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.174 [INFO][4916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.184 [WARNING][4916] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" HandleID="k8s-pod-network.f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.185 [INFO][4916] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" HandleID="k8s-pod-network.f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.188 [INFO][4916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:35.198165 containerd[1543]: 2025-09-05 00:37:35.195 [INFO][4895] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:35.198707 containerd[1543]: time="2025-09-05T00:37:35.198302456Z" level=info msg="TearDown network for sandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\" successfully" Sep 5 00:37:35.198707 containerd[1543]: time="2025-09-05T00:37:35.198328335Z" level=info msg="StopPodSandbox for \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\" returns successfully" Sep 5 00:37:35.198769 kubelet[2634]: E0905 00:37:35.198646 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:35.199926 containerd[1543]: time="2025-09-05T00:37:35.198992294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n6pct,Uid:2cdc98cd-288c-46fa-8622-2d2669974b33,Namespace:kube-system,Attempt:1,}" Sep 5 00:37:35.203078 systemd[1]: run-netns-cni\x2d6d657c1b\x2d453a\x2d8919\x2d54ae\x2d9dd1a8dc206c.mount: Deactivated successfully. Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.145 [INFO][4885] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.145 [INFO][4885] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" iface="eth0" netns="/var/run/netns/cni-ade6fc18-76ab-4ad3-a541-4015ea84ae27" Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.146 [INFO][4885] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" iface="eth0" netns="/var/run/netns/cni-ade6fc18-76ab-4ad3-a541-4015ea84ae27" Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.149 [INFO][4885] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" iface="eth0" netns="/var/run/netns/cni-ade6fc18-76ab-4ad3-a541-4015ea84ae27" Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.149 [INFO][4885] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.149 [INFO][4885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.182 [INFO][4922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" HandleID="k8s-pod-network.ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.182 [INFO][4922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.188 [INFO][4922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.204 [WARNING][4922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" HandleID="k8s-pod-network.ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.204 [INFO][4922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" HandleID="k8s-pod-network.ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.205 [INFO][4922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:35.209725 containerd[1543]: 2025-09-05 00:37:35.207 [INFO][4885] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:35.210208 containerd[1543]: time="2025-09-05T00:37:35.210027348Z" level=info msg="TearDown network for sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\" successfully" Sep 5 00:37:35.210208 containerd[1543]: time="2025-09-05T00:37:35.210058628Z" level=info msg="StopPodSandbox for \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\" returns successfully" Sep 5 00:37:35.212434 systemd[1]: run-netns-cni\x2dade6fc18\x2d76ab\x2d4ad3\x2da541\x2d4015ea84ae27.mount: Deactivated successfully. 
Sep 5 00:37:35.213770 containerd[1543]: time="2025-09-05T00:37:35.213745620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-5d4st,Uid:3c0cb5b9-141e-499d-8a34-a9fa818a010c,Namespace:calico-system,Attempt:1,}" Sep 5 00:37:35.240973 kubelet[2634]: E0905 00:37:35.240729 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:35.241286 kubelet[2634]: E0905 00:37:35.241031 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:35.335222 systemd-networkd[1225]: cali624b092adc8: Link UP Sep 5 00:37:35.335841 systemd-networkd[1225]: cali624b092adc8: Gained carrier Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.258 [INFO][4959] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.276 [INFO][4959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--5d4st-eth0 goldmane-7988f88666- calico-system 3c0cb5b9-141e-499d-8a34-a9fa818a010c 967 0 2025-09-05 00:37:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-5d4st eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali624b092adc8 [] [] }} ContainerID="3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" Namespace="calico-system" Pod="goldmane-7988f88666-5d4st" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--5d4st-" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.276 [INFO][4959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" Namespace="calico-system" Pod="goldmane-7988f88666-5d4st" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.298 [INFO][4981] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" HandleID="k8s-pod-network.3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.298 [INFO][4981] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" HandleID="k8s-pod-network.3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-5d4st", "timestamp":"2025-09-05 00:37:35.298735143 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.298 [INFO][4981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.298 [INFO][4981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.299 [INFO][4981] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.307 [INFO][4981] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" host="localhost" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.311 [INFO][4981] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.314 [INFO][4981] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.316 [INFO][4981] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.318 [INFO][4981] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.318 [INFO][4981] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" host="localhost" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.320 [INFO][4981] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.323 [INFO][4981] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" host="localhost" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.328 [INFO][4981] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" host="localhost" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.328 [INFO][4981] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" host="localhost" Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.328 [INFO][4981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:37:35.351615 containerd[1543]: 2025-09-05 00:37:35.328 [INFO][4981] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" HandleID="k8s-pod-network.3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:35.352169 containerd[1543]: 2025-09-05 00:37:35.331 [INFO][4959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" Namespace="calico-system" Pod="goldmane-7988f88666-5d4st" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--5d4st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--5d4st-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3c0cb5b9-141e-499d-8a34-a9fa818a010c", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-5d4st", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali624b092adc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:35.352169 containerd[1543]: 2025-09-05 00:37:35.331 [INFO][4959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" Namespace="calico-system" Pod="goldmane-7988f88666-5d4st" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:35.352169 containerd[1543]: 2025-09-05 00:37:35.331 [INFO][4959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali624b092adc8 ContainerID="3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" Namespace="calico-system" Pod="goldmane-7988f88666-5d4st" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:35.352169 containerd[1543]: 2025-09-05 00:37:35.336 [INFO][4959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" Namespace="calico-system" Pod="goldmane-7988f88666-5d4st" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:35.352169 containerd[1543]: 2025-09-05 00:37:35.340 [INFO][4959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" Namespace="calico-system" Pod="goldmane-7988f88666-5d4st" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--5d4st-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--5d4st-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3c0cb5b9-141e-499d-8a34-a9fa818a010c", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d", Pod:"goldmane-7988f88666-5d4st", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali624b092adc8", MAC:"be:09:14:10:05:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:35.352169 containerd[1543]: 2025-09-05 00:37:35.349 [INFO][4959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d" Namespace="calico-system" Pod="goldmane-7988f88666-5d4st" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:35.365586 containerd[1543]: time="2025-09-05T00:37:35.365503348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:37:35.365586 containerd[1543]: time="2025-09-05T00:37:35.365567628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:37:35.365775 containerd[1543]: time="2025-09-05T00:37:35.365583508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:35.365835 containerd[1543]: time="2025-09-05T00:37:35.365754628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:35.402736 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:37:35.424369 containerd[1543]: time="2025-09-05T00:37:35.424318892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-5d4st,Uid:3c0cb5b9-141e-499d-8a34-a9fa818a010c,Namespace:calico-system,Attempt:1,} returns sandbox id \"3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d\"" Sep 5 00:37:35.441402 systemd-networkd[1225]: cali4827ecddfcf: Link UP Sep 5 00:37:35.441998 systemd-networkd[1225]: cali4827ecddfcf: Gained carrier Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.262 [INFO][4933] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.279 [INFO][4933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0 coredns-7c65d6cfc9- kube-system 2cdc98cd-288c-46fa-8622-2d2669974b33 966 0 2025-09-05 00:36:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-n6pct eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4827ecddfcf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n6pct" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n6pct-" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.279 [INFO][4933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n6pct" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.303 [INFO][4987] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" HandleID="k8s-pod-network.49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.304 [INFO][4987] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" HandleID="k8s-pod-network.49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004941f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-n6pct", "timestamp":"2025-09-05 00:37:35.303923691 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.304 [INFO][4987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.328 [INFO][4987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.329 [INFO][4987] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.408 [INFO][4987] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" host="localhost" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.413 [INFO][4987] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.420 [INFO][4987] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.422 [INFO][4987] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.425 [INFO][4987] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.425 [INFO][4987] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" host="localhost" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.428 [INFO][4987] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3 Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.431 [INFO][4987] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" host="localhost" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.436 [INFO][4987] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" host="localhost" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.436 [INFO][4987] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" host="localhost" Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.436 [INFO][4987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:37:35.455529 containerd[1543]: 2025-09-05 00:37:35.436 [INFO][4987] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" HandleID="k8s-pod-network.49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:35.456056 containerd[1543]: 2025-09-05 00:37:35.438 [INFO][4933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n6pct" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2cdc98cd-288c-46fa-8622-2d2669974b33", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-n6pct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4827ecddfcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:35.456056 containerd[1543]: 2025-09-05 00:37:35.438 [INFO][4933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n6pct" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:35.456056 containerd[1543]: 2025-09-05 00:37:35.438 [INFO][4933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4827ecddfcf ContainerID="49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n6pct" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:35.456056 containerd[1543]: 2025-09-05 00:37:35.442 [INFO][4933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n6pct" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:35.456056 
containerd[1543]: 2025-09-05 00:37:35.442 [INFO][4933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n6pct" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2cdc98cd-288c-46fa-8622-2d2669974b33", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3", Pod:"coredns-7c65d6cfc9-n6pct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4827ecddfcf", MAC:"36:2c:8b:96:3b:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:35.456056 containerd[1543]: 2025-09-05 00:37:35.453 [INFO][4933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n6pct" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:35.471975 containerd[1543]: time="2025-09-05T00:37:35.471754142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:37:35.471975 containerd[1543]: time="2025-09-05T00:37:35.471803302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:37:35.471975 containerd[1543]: time="2025-09-05T00:37:35.471813302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:35.471975 containerd[1543]: time="2025-09-05T00:37:35.471917822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:37:35.495357 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:37:35.512002 containerd[1543]: time="2025-09-05T00:37:35.511968889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n6pct,Uid:2cdc98cd-288c-46fa-8622-2d2669974b33,Namespace:kube-system,Attempt:1,} returns sandbox id \"49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3\"" Sep 5 00:37:35.512588 kubelet[2634]: E0905 00:37:35.512563 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:35.514449 containerd[1543]: time="2025-09-05T00:37:35.514422883Z" level=info msg="CreateContainer within sandbox \"49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:37:35.538750 containerd[1543]: time="2025-09-05T00:37:35.538643267Z" level=info msg="CreateContainer within sandbox \"49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7702455cf2760fe4dcff9c6ff9731633ffe2e0a8c8749ad171350fd5ab28c1aa\"" Sep 5 00:37:35.539215 containerd[1543]: time="2025-09-05T00:37:35.539091946Z" level=info msg="StartContainer for \"7702455cf2760fe4dcff9c6ff9731633ffe2e0a8c8749ad171350fd5ab28c1aa\"" Sep 5 00:37:35.582155 containerd[1543]: time="2025-09-05T00:37:35.582118286Z" level=info msg="StartContainer for \"7702455cf2760fe4dcff9c6ff9731633ffe2e0a8c8749ad171350fd5ab28c1aa\" returns successfully" Sep 5 00:37:35.640108 systemd-networkd[1225]: cali009ff3f38e0: Gained IPv6LL Sep 5 00:37:35.641195 systemd-networkd[1225]: cali94694352815: Gained IPv6LL Sep 5 00:37:35.953547 containerd[1543]: time="2025-09-05T00:37:35.953105267Z" level=error msg="Failed to get usage for snapshot \"b7c234b008c6d38b9428331348f24a3b5dba813895439f79190cf6364b11f951\"" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/79/fs/etc/service/enabled/felix/supervise/pid.new: no such file or directory" Sep 5 00:37:36.045939 kernel: bpftool[5149]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 00:37:36.197701 systemd-networkd[1225]: vxlan.calico: Link UP Sep 5 00:37:36.197707 systemd-networkd[1225]: vxlan.calico: Gained carrier Sep 5 00:37:36.247117 kubelet[2634]: E0905 00:37:36.246770 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:36.264805 kubelet[2634]: I0905 00:37:36.264580 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-n6pct" podStartSLOduration=37.264563842 podStartE2EDuration="37.264563842s" podCreationTimestamp="2025-09-05 00:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:37:36.262672166 +0000 UTC m=+42.294345894" watchObservedRunningTime="2025-09-05 00:37:36.264563842 +0000 UTC m=+42.296237570" Sep 5 00:37:37.048498 systemd-networkd[1225]: cali4827ecddfcf: Gained IPv6LL Sep 5 00:37:37.250571 kubelet[2634]: E0905 00:37:37.250524 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:37.368047 systemd-networkd[1225]: cali624b092adc8: Gained IPv6LL Sep 5 00:37:37.560072 systemd-networkd[1225]: vxlan.calico: Gained IPv6LL Sep 5 00:37:38.252373 kubelet[2634]: E0905 00:37:38.252328 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:38.918823 kubelet[2634]: I0905 00:37:38.918778 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:37:39.222136 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:49116.service - OpenSSH per-connection server daemon (10.0.0.1:49116). Sep 5 00:37:39.264292 sshd[5285]: Accepted publickey for core from 10.0.0.1 port 49116 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:37:39.265616 sshd[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:39.269816 systemd-logind[1518]: New session 8 of user core. Sep 5 00:37:39.275158 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 00:37:39.495499 sshd[5285]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:39.498723 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:49116.service: Deactivated successfully. Sep 5 00:37:39.500865 systemd-logind[1518]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:37:39.500975 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:37:39.502979 systemd-logind[1518]: Removed session 8. Sep 5 00:37:44.506133 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:53888.service - OpenSSH per-connection server daemon (10.0.0.1:53888). Sep 5 00:37:44.546596 sshd[5311]: Accepted publickey for core from 10.0.0.1 port 53888 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:37:44.547984 sshd[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:44.551430 systemd-logind[1518]: New session 9 of user core. Sep 5 00:37:44.559117 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:37:44.727100 sshd[5311]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:44.730111 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:53888.service: Deactivated successfully. Sep 5 00:37:44.732381 systemd-logind[1518]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:37:44.732441 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:37:44.733468 systemd-logind[1518]: Removed session 9. 
Sep 5 00:37:47.243506 containerd[1543]: time="2025-09-05T00:37:47.243460471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:47.244141 containerd[1543]: time="2025-09-05T00:37:47.244117470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 00:37:47.244829 containerd[1543]: time="2025-09-05T00:37:47.244747789Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:47.248355 containerd[1543]: time="2025-09-05T00:37:47.247259105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:47.248355 containerd[1543]: time="2025-09-05T00:37:47.247998783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 14.100667276s" Sep 5 00:37:47.248355 containerd[1543]: time="2025-09-05T00:37:47.248024063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 00:37:47.249530 containerd[1543]: time="2025-09-05T00:37:47.249510381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 00:37:47.250852 containerd[1543]: time="2025-09-05T00:37:47.250143659Z" level=info msg="CreateContainer within sandbox \"0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:37:47.267269 containerd[1543]: time="2025-09-05T00:37:47.267169629Z" level=info msg="CreateContainer within sandbox \"0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"337e2b21dd6a69afeeaf173df215c698ce19b2489cb0e4bc13c913e34ddcf4ee\"" Sep 5 00:37:47.267920 containerd[1543]: time="2025-09-05T00:37:47.267862988Z" level=info msg="StartContainer for \"337e2b21dd6a69afeeaf173df215c698ce19b2489cb0e4bc13c913e34ddcf4ee\"" Sep 5 00:37:47.334723 containerd[1543]: time="2025-09-05T00:37:47.334684268Z" level=info msg="StartContainer for \"337e2b21dd6a69afeeaf173df215c698ce19b2489cb0e4bc13c913e34ddcf4ee\" returns successfully" Sep 5 00:37:48.312901 kubelet[2634]: I0905 00:37:48.312438 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d5cb679cb-llcvx" podStartSLOduration=25.50748006 podStartE2EDuration="40.312422809s" podCreationTimestamp="2025-09-05 00:37:08 +0000 UTC" firstStartedPulling="2025-09-05 00:37:32.444067113 +0000 UTC m=+38.475740841" lastFinishedPulling="2025-09-05 00:37:47.249009902 +0000 UTC m=+53.280683590" observedRunningTime="2025-09-05 00:37:48.31198009 +0000 UTC m=+54.343653818" watchObservedRunningTime="2025-09-05 00:37:48.312422809 +0000 UTC m=+54.344096537" Sep 5 00:37:49.187986 containerd[1543]: time="2025-09-05T00:37:49.187598554Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:49.188396 containerd[1543]: time="2025-09-05T00:37:49.188341473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 00:37:49.189019 containerd[1543]: time="2025-09-05T00:37:49.188970192Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:49.191225 containerd[1543]: time="2025-09-05T00:37:49.191176868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:49.192099 containerd[1543]: time="2025-09-05T00:37:49.191704547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.941405328s" Sep 5 00:37:49.192099 containerd[1543]: time="2025-09-05T00:37:49.191746507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 00:37:49.192931 containerd[1543]: time="2025-09-05T00:37:49.192881665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 00:37:49.198699 containerd[1543]: time="2025-09-05T00:37:49.198671135Z" level=info msg="CreateContainer within sandbox \"55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 00:37:49.223331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729851974.mount: Deactivated successfully. 
Sep 5 00:37:49.224759 containerd[1543]: time="2025-09-05T00:37:49.224701050Z" level=info msg="CreateContainer within sandbox \"55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3d54d1f0c0a03a14be883226705f2f01958d34e80a3397c6ffd5865be072be9e\"" Sep 5 00:37:49.225294 containerd[1543]: time="2025-09-05T00:37:49.225269169Z" level=info msg="StartContainer for \"3d54d1f0c0a03a14be883226705f2f01958d34e80a3397c6ffd5865be072be9e\"" Sep 5 00:37:49.361474 containerd[1543]: time="2025-09-05T00:37:49.361399893Z" level=info msg="StartContainer for \"3d54d1f0c0a03a14be883226705f2f01958d34e80a3397c6ffd5865be072be9e\" returns successfully" Sep 5 00:37:49.366828 kubelet[2634]: I0905 00:37:49.366791 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:37:49.378731 kubelet[2634]: I0905 00:37:49.377867 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-788bb96d6f-s5fmw" podStartSLOduration=22.704561163 podStartE2EDuration="39.377851225s" podCreationTimestamp="2025-09-05 00:37:10 +0000 UTC" firstStartedPulling="2025-09-05 00:37:32.519419443 +0000 UTC m=+38.551093171" lastFinishedPulling="2025-09-05 00:37:49.192709545 +0000 UTC m=+55.224383233" observedRunningTime="2025-09-05 00:37:49.376864946 +0000 UTC m=+55.408538674" watchObservedRunningTime="2025-09-05 00:37:49.377851225 +0000 UTC m=+55.409524913" Sep 5 00:37:49.740144 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:53900.service - OpenSSH per-connection server daemon (10.0.0.1:53900). Sep 5 00:37:49.783086 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 53900 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:37:49.785316 sshd[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:49.789986 systemd-logind[1518]: New session 10 of user core. Sep 5 00:37:49.802233 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:37:50.059355 sshd[5459]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:50.065143 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:55894.service - OpenSSH per-connection server daemon (10.0.0.1:55894). Sep 5 00:37:50.065519 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:53900.service: Deactivated successfully. Sep 5 00:37:50.068993 systemd-logind[1518]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:37:50.069260 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:37:50.071242 systemd-logind[1518]: Removed session 10. Sep 5 00:37:50.101366 sshd[5473]: Accepted publickey for core from 10.0.0.1 port 55894 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:37:50.102855 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:50.106788 systemd-logind[1518]: New session 11 of user core. Sep 5 00:37:50.117213 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:37:50.310429 sshd[5473]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:50.320263 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:55910.service - OpenSSH per-connection server daemon (10.0.0.1:55910). Sep 5 00:37:50.321220 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:55894.service: Deactivated successfully. Sep 5 00:37:50.324758 systemd-logind[1518]: Session 11 logged out. Waiting for processes to exit. 
Sep 5 00:37:50.328004 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:37:50.331547 systemd-logind[1518]: Removed session 11. Sep 5 00:37:50.377539 sshd[5487]: Accepted publickey for core from 10.0.0.1 port 55910 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:37:50.378792 sshd[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:50.383813 systemd-logind[1518]: New session 12 of user core. Sep 5 00:37:50.395246 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:37:50.644832 sshd[5487]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:50.652500 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:55910.service: Deactivated successfully. Sep 5 00:37:50.656076 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:37:50.656711 systemd-logind[1518]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:37:50.657788 systemd-logind[1518]: Removed session 12. Sep 5 00:37:50.841184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112554244.mount: Deactivated successfully. Sep 5 00:37:50.862950 containerd[1543]: time="2025-09-05T00:37:50.862882512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:50.863580 containerd[1543]: time="2025-09-05T00:37:50.863521031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 00:37:50.864328 containerd[1543]: time="2025-09-05T00:37:50.864304430Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:50.867019 containerd[1543]: time="2025-09-05T00:37:50.866966225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:50.868591 containerd[1543]: time="2025-09-05T00:37:50.868191703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.675262238s" Sep 5 00:37:50.868591 containerd[1543]: time="2025-09-05T00:37:50.868226983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 00:37:50.869612 containerd[1543]: time="2025-09-05T00:37:50.869591381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:37:50.870982 containerd[1543]: time="2025-09-05T00:37:50.870949619Z" level=info msg="CreateContainer within sandbox \"3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 00:37:50.883735 containerd[1543]: time="2025-09-05T00:37:50.883683597Z" level=info msg="CreateContainer within sandbox \"3ef3f035cf10ab6185c9cb0c0a7ca79e87bd5d850e09a1d4d859ad924ae9b02d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id 
\"88808ecdfef703212708df346a1f0c3102c7273a5e9164ecdea2252364b89c96\"" Sep 5 00:37:50.884284 containerd[1543]: time="2025-09-05T00:37:50.884245836Z" level=info msg="StartContainer for \"88808ecdfef703212708df346a1f0c3102c7273a5e9164ecdea2252364b89c96\"" Sep 5 00:37:50.947069 containerd[1543]: time="2025-09-05T00:37:50.946550010Z" level=info msg="StartContainer for \"88808ecdfef703212708df346a1f0c3102c7273a5e9164ecdea2252364b89c96\" returns successfully" Sep 5 00:37:51.125038 containerd[1543]: time="2025-09-05T00:37:51.124992788Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:51.125666 containerd[1543]: time="2025-09-05T00:37:51.125572507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 00:37:51.127831 containerd[1543]: time="2025-09-05T00:37:51.127790943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 258.080362ms" Sep 5 00:37:51.127831 containerd[1543]: time="2025-09-05T00:37:51.127830943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 00:37:51.128966 containerd[1543]: time="2025-09-05T00:37:51.128942701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 00:37:51.130210 containerd[1543]: time="2025-09-05T00:37:51.129931019Z" level=info msg="CreateContainer within sandbox \"dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:37:51.145448 containerd[1543]: time="2025-09-05T00:37:51.145406833Z" level=info msg="CreateContainer within sandbox \"dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"27dbe12455a528f33c9b4e6802a803672c1377bbf6588dbaeec131869b73a1f4\"" Sep 5 00:37:51.146284 containerd[1543]: time="2025-09-05T00:37:51.146235752Z" level=info msg="StartContainer for \"27dbe12455a528f33c9b4e6802a803672c1377bbf6588dbaeec131869b73a1f4\"" Sep 5 00:37:51.204174 containerd[1543]: time="2025-09-05T00:37:51.203965735Z" level=info msg="StartContainer for \"27dbe12455a528f33c9b4e6802a803672c1377bbf6588dbaeec131869b73a1f4\" returns successfully" Sep 5 00:37:51.411483 kubelet[2634]: I0905 00:37:51.410580 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7477dc8b74-gzlgn" podStartSLOduration=1.285059468 podStartE2EDuration="20.410347787s" podCreationTimestamp="2025-09-05 00:37:31 +0000 UTC" firstStartedPulling="2025-09-05 00:37:31.744056382 +0000 UTC m=+37.775730070" lastFinishedPulling="2025-09-05 00:37:50.869344661 +0000 UTC m=+56.901018389" observedRunningTime="2025-09-05 00:37:51.404591157 +0000 UTC m=+57.436264845" watchObservedRunningTime="2025-09-05 00:37:51.410347787 +0000 UTC m=+57.442021555" Sep 5 00:37:51.412424 kubelet[2634]: I0905 00:37:51.412382 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d5cb679cb-vqrh4" podStartSLOduration=26.627950118 
podStartE2EDuration="43.412371903s" podCreationTimestamp="2025-09-05 00:37:08 +0000 UTC" firstStartedPulling="2025-09-05 00:37:34.344072637 +0000 UTC m=+40.375746325" lastFinishedPulling="2025-09-05 00:37:51.128494382 +0000 UTC m=+57.160168110" observedRunningTime="2025-09-05 00:37:51.391860298 +0000 UTC m=+57.423534106" watchObservedRunningTime="2025-09-05 00:37:51.412371903 +0000 UTC m=+57.444045591" Sep 5 00:37:52.247677 containerd[1543]: time="2025-09-05T00:37:52.247632382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:52.248150 containerd[1543]: time="2025-09-05T00:37:52.248115341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 00:37:52.249422 containerd[1543]: time="2025-09-05T00:37:52.249383579Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:52.263718 containerd[1543]: time="2025-09-05T00:37:52.263068596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:52.263718 containerd[1543]: time="2025-09-05T00:37:52.263557035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.134581614s" Sep 5 00:37:52.263718 containerd[1543]: time="2025-09-05T00:37:52.263590155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 00:37:52.265419 containerd[1543]: time="2025-09-05T00:37:52.265302752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 00:37:52.266562 containerd[1543]: time="2025-09-05T00:37:52.266526390Z" level=info msg="CreateContainer within sandbox \"802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 00:37:52.280023 containerd[1543]: time="2025-09-05T00:37:52.279976808Z" level=info msg="CreateContainer within sandbox \"802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"493078e71415e31c41e344f47993bc485c9bbdfdef00d1dce375fd04e2b4e1bd\"" Sep 5 00:37:52.280950 containerd[1543]: time="2025-09-05T00:37:52.280845206Z" level=info msg="StartContainer for \"493078e71415e31c41e344f47993bc485c9bbdfdef00d1dce375fd04e2b4e1bd\"" Sep 5 00:37:52.336892 containerd[1543]: time="2025-09-05T00:37:52.336770153Z" level=info msg="StartContainer for \"493078e71415e31c41e344f47993bc485c9bbdfdef00d1dce375fd04e2b4e1bd\" returns successfully" Sep 5 00:37:52.383077 kubelet[2634]: I0905 00:37:52.383040 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:37:54.057007 containerd[1543]: time="2025-09-05T00:37:54.056922755Z" level=info msg="StopPodSandbox for \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\"" Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.091 [WARNING][5643] 
cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--5d4st-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3c0cb5b9-141e-499d-8a34-a9fa818a010c", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d", Pod:"goldmane-7988f88666-5d4st", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali624b092adc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.092 [INFO][5643] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.092 [INFO][5643] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" iface="eth0" netns="" Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.092 [INFO][5643] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.092 [INFO][5643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.116 [INFO][5655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" HandleID="k8s-pod-network.ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.116 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.116 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.127 [WARNING][5655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" HandleID="k8s-pod-network.ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.127 [INFO][5655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" HandleID="k8s-pod-network.ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.129 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:54.132625 containerd[1543]: 2025-09-05 00:37:54.130 [INFO][5643] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:54.133053 containerd[1543]: time="2025-09-05T00:37:54.132658712Z" level=info msg="TearDown network for sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\" successfully" Sep 5 00:37:54.133053 containerd[1543]: time="2025-09-05T00:37:54.132681272Z" level=info msg="StopPodSandbox for \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\" returns successfully" Sep 5 00:37:54.133480 containerd[1543]: time="2025-09-05T00:37:54.133448031Z" level=info msg="RemovePodSandbox for \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\"" Sep 5 00:37:54.138956 containerd[1543]: time="2025-09-05T00:37:54.138899582Z" level=info msg="Forcibly stopping sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\"" Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.170 [WARNING][5672] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--5d4st-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3c0cb5b9-141e-499d-8a34-a9fa818a010c", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d", Pod:"goldmane-7988f88666-5d4st", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali624b092adc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.170 [INFO][5672] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.170 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" iface="eth0" netns="" Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.170 [INFO][5672] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.170 [INFO][5672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.191 [INFO][5680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" HandleID="k8s-pod-network.ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.191 [INFO][5680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.191 [INFO][5680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.203 [WARNING][5680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" HandleID="k8s-pod-network.ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.203 [INFO][5680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" HandleID="k8s-pod-network.ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Workload="localhost-k8s-goldmane--7988f88666--5d4st-eth0" Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.204 [INFO][5680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:54.207702 containerd[1543]: 2025-09-05 00:37:54.205 [INFO][5672] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9" Sep 5 00:37:54.208170 containerd[1543]: time="2025-09-05T00:37:54.207737070Z" level=info msg="TearDown network for sandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\" successfully" Sep 5 00:37:54.253357 containerd[1543]: time="2025-09-05T00:37:54.253303757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:37:54.253465 containerd[1543]: time="2025-09-05T00:37:54.253396116Z" level=info msg="RemovePodSandbox \"ff06cbdff07230d90c1080fdf885cd116357f9f2a822e873e8ce05e53465e2a9\" returns successfully" Sep 5 00:37:54.253944 containerd[1543]: time="2025-09-05T00:37:54.253899676Z" level=info msg="StopPodSandbox for \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\"" Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.292 [WARNING][5702] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0", GenerateName:"calico-apiserver-7d5cb679cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1594e321-794f-47ae-a260-d32320cc168b", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5cb679cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad", Pod:"calico-apiserver-7d5cb679cb-vqrh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali009ff3f38e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.292 [INFO][5702] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.292 [INFO][5702] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" iface="eth0" netns="" Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.292 [INFO][5702] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.292 [INFO][5702] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.321 [INFO][5712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" HandleID="k8s-pod-network.c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.321 [INFO][5712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.321 [INFO][5712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.336 [WARNING][5712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" HandleID="k8s-pod-network.c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.336 [INFO][5712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" HandleID="k8s-pod-network.c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.338 [INFO][5712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:54.343001 containerd[1543]: 2025-09-05 00:37:54.339 [INFO][5702] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:54.343001 containerd[1543]: time="2025-09-05T00:37:54.342139092Z" level=info msg="TearDown network for sandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\" successfully" Sep 5 00:37:54.343001 containerd[1543]: time="2025-09-05T00:37:54.342164852Z" level=info msg="StopPodSandbox for \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\" returns successfully" Sep 5 00:37:54.343001 containerd[1543]: time="2025-09-05T00:37:54.342746851Z" level=info msg="RemovePodSandbox for \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\"" Sep 5 00:37:54.343001 containerd[1543]: time="2025-09-05T00:37:54.342778011Z" level=info msg="Forcibly stopping sandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\"" Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.379 [WARNING][5730] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0", GenerateName:"calico-apiserver-7d5cb679cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1594e321-794f-47ae-a260-d32320cc168b", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5cb679cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc66959baf6b69d035f4675a74235c33fadcba47399659a7893c7d3756ca35ad", Pod:"calico-apiserver-7d5cb679cb-vqrh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali009ff3f38e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.379 [INFO][5730] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.379 [INFO][5730] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" iface="eth0" netns="" Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.379 [INFO][5730] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.379 [INFO][5730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.401 [INFO][5738] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" HandleID="k8s-pod-network.c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.403 [INFO][5738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.403 [INFO][5738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.412 [WARNING][5738] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" HandleID="k8s-pod-network.c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.412 [INFO][5738] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" HandleID="k8s-pod-network.c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--vqrh4-eth0" Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.414 [INFO][5738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:54.417238 containerd[1543]: 2025-09-05 00:37:54.415 [INFO][5730] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86" Sep 5 00:37:54.417646 containerd[1543]: time="2025-09-05T00:37:54.417273930Z" level=info msg="TearDown network for sandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\" successfully" Sep 5 00:37:54.420856 containerd[1543]: time="2025-09-05T00:37:54.420017406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:37:54.420856 containerd[1543]: time="2025-09-05T00:37:54.420085766Z" level=info msg="RemovePodSandbox \"c6677004d3e3116f7a20e5ddc1cfbad11d77aa3643987ccdae595948a730dd86\" returns successfully" Sep 5 00:37:54.421321 containerd[1543]: time="2025-09-05T00:37:54.421297444Z" level=info msg="StopPodSandbox for \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\"" Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.456 [WARNING][5756] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" WorkloadEndpoint="localhost-k8s-whisker--6f54597f5f--zzxvr-eth0" Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.456 [INFO][5756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.456 [INFO][5756] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" iface="eth0" netns="" Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.456 [INFO][5756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.456 [INFO][5756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.476 [INFO][5764] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" HandleID="k8s-pod-network.4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Workload="localhost-k8s-whisker--6f54597f5f--zzxvr-eth0" Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.476 [INFO][5764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.476 [INFO][5764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.486 [WARNING][5764] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" HandleID="k8s-pod-network.4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Workload="localhost-k8s-whisker--6f54597f5f--zzxvr-eth0" Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.486 [INFO][5764] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" HandleID="k8s-pod-network.4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Workload="localhost-k8s-whisker--6f54597f5f--zzxvr-eth0" Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.487 [INFO][5764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:54.491323 containerd[1543]: 2025-09-05 00:37:54.489 [INFO][5756] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:54.492199 containerd[1543]: time="2025-09-05T00:37:54.491330250Z" level=info msg="TearDown network for sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\" successfully" Sep 5 00:37:54.492199 containerd[1543]: time="2025-09-05T00:37:54.491354850Z" level=info msg="StopPodSandbox for \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\" returns successfully" Sep 5 00:37:54.492199 containerd[1543]: time="2025-09-05T00:37:54.491892049Z" level=info msg="RemovePodSandbox for \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\"" Sep 5 00:37:54.492199 containerd[1543]: time="2025-09-05T00:37:54.491941449Z" level=info msg="Forcibly stopping sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\"" Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.527 [WARNING][5782] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" WorkloadEndpoint="localhost-k8s-whisker--6f54597f5f--zzxvr-eth0" Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.527 [INFO][5782] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.527 [INFO][5782] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" iface="eth0" netns="" Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.527 [INFO][5782] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.527 [INFO][5782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.565 [INFO][5791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" HandleID="k8s-pod-network.4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Workload="localhost-k8s-whisker--6f54597f5f--zzxvr-eth0" Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.566 [INFO][5791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.566 [INFO][5791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.576 [WARNING][5791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" HandleID="k8s-pod-network.4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Workload="localhost-k8s-whisker--6f54597f5f--zzxvr-eth0" Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.576 [INFO][5791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" HandleID="k8s-pod-network.4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Workload="localhost-k8s-whisker--6f54597f5f--zzxvr-eth0" Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.577 [INFO][5791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:54.584665 containerd[1543]: 2025-09-05 00:37:54.580 [INFO][5782] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5" Sep 5 00:37:54.585053 containerd[1543]: time="2025-09-05T00:37:54.584768338Z" level=info msg="TearDown network for sandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\" successfully" Sep 5 00:37:54.588893 containerd[1543]: time="2025-09-05T00:37:54.588584372Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:37:54.588893 containerd[1543]: time="2025-09-05T00:37:54.588649892Z" level=info msg="RemovePodSandbox \"4302a3aade8da3b4790bb33cc163ef22d3ba9be3c51bdf07e7b8b40ecd06dcb5\" returns successfully" Sep 5 00:37:54.589327 containerd[1543]: time="2025-09-05T00:37:54.589304411Z" level=info msg="StopPodSandbox for \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\"" Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.626 [WARNING][5808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0", GenerateName:"calico-apiserver-7d5cb679cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8bf72a2-a8ab-4666-8ec0-44f6b53426bd", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5cb679cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1", Pod:"calico-apiserver-7d5cb679cb-llcvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82e19447ccd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.626 [INFO][5808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.626 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" iface="eth0" netns="" Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.626 [INFO][5808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.626 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.652 [INFO][5816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" HandleID="k8s-pod-network.35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.652 [INFO][5816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.652 [INFO][5816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.662 [WARNING][5816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" HandleID="k8s-pod-network.35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.662 [INFO][5816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" HandleID="k8s-pod-network.35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.663 [INFO][5816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:54.668486 containerd[1543]: 2025-09-05 00:37:54.666 [INFO][5808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:54.669194 containerd[1543]: time="2025-09-05T00:37:54.668527842Z" level=info msg="TearDown network for sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\" successfully" Sep 5 00:37:54.669194 containerd[1543]: time="2025-09-05T00:37:54.668551842Z" level=info msg="StopPodSandbox for \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\" returns successfully" Sep 5 00:37:54.669194 containerd[1543]: time="2025-09-05T00:37:54.669150601Z" level=info msg="RemovePodSandbox for \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\"" Sep 5 00:37:54.669271 containerd[1543]: time="2025-09-05T00:37:54.669211321Z" level=info msg="Forcibly stopping sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\"" Sep 5 00:37:54.734672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2420723849.mount: Deactivated successfully. Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.710 [WARNING][5834] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0", GenerateName:"calico-apiserver-7d5cb679cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8bf72a2-a8ab-4666-8ec0-44f6b53426bd", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5cb679cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0948cb19b10332b5ae9cbe7a0bb5c25f22569c07d9c7d88138d7cf4f239838e1", Pod:"calico-apiserver-7d5cb679cb-llcvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82e19447ccd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.710 [INFO][5834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.710 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" iface="eth0" netns="" Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.710 [INFO][5834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.710 [INFO][5834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.739 [INFO][5843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" HandleID="k8s-pod-network.35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.739 [INFO][5843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.739 [INFO][5843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.749 [WARNING][5843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" HandleID="k8s-pod-network.35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.749 [INFO][5843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" HandleID="k8s-pod-network.35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Workload="localhost-k8s-calico--apiserver--7d5cb679cb--llcvx-eth0" Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.751 [INFO][5843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:54.757285 containerd[1543]: 2025-09-05 00:37:54.755 [INFO][5834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a" Sep 5 00:37:54.757285 containerd[1543]: time="2025-09-05T00:37:54.757280178Z" level=info msg="TearDown network for sandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\" successfully" Sep 5 00:37:54.772816 containerd[1543]: time="2025-09-05T00:37:54.772716873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:37:54.772816 containerd[1543]: time="2025-09-05T00:37:54.772810993Z" level=info msg="RemovePodSandbox \"35e79419ce7df4c4258e4cbc685655bfd77538842b4952e7905950238fb3707a\" returns successfully" Sep 5 00:37:54.774020 containerd[1543]: time="2025-09-05T00:37:54.773696272Z" level=info msg="StopPodSandbox for \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\"" Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.812 [WARNING][5866] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2cdc98cd-288c-46fa-8622-2d2669974b33", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3", Pod:"coredns-7c65d6cfc9-n6pct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4827ecddfcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.812 [INFO][5866] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.812 [INFO][5866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" iface="eth0" netns="" Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.812 [INFO][5866] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.812 [INFO][5866] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.840 [INFO][5875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" HandleID="k8s-pod-network.f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.840 [INFO][5875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.840 [INFO][5875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.849 [WARNING][5875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" HandleID="k8s-pod-network.f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.849 [INFO][5875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" HandleID="k8s-pod-network.f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.850 [INFO][5875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:54.854474 containerd[1543]: 2025-09-05 00:37:54.852 [INFO][5866] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:54.855110 containerd[1543]: time="2025-09-05T00:37:54.854507901Z" level=info msg="TearDown network for sandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\" successfully" Sep 5 00:37:54.855110 containerd[1543]: time="2025-09-05T00:37:54.854531581Z" level=info msg="StopPodSandbox for \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\" returns successfully" Sep 5 00:37:54.855110 containerd[1543]: time="2025-09-05T00:37:54.854973620Z" level=info msg="RemovePodSandbox for \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\"" Sep 5 00:37:54.855110 containerd[1543]: time="2025-09-05T00:37:54.855003100Z" level=info msg="Forcibly stopping sandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\"" Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.893 [WARNING][5893] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2cdc98cd-288c-46fa-8622-2d2669974b33", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49f8d074203bfd2e468bb215134220433ed456a8bc498ce2ed16d4a309133ae3", Pod:"coredns-7c65d6cfc9-n6pct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4827ecddfcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.894 [INFO][5893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.894 [INFO][5893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" iface="eth0" netns="" Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.894 [INFO][5893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.894 [INFO][5893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.923 [INFO][5902] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" HandleID="k8s-pod-network.f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.923 [INFO][5902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.924 [INFO][5902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.936 [WARNING][5902] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" HandleID="k8s-pod-network.f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.936 [INFO][5902] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" HandleID="k8s-pod-network.f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Workload="localhost-k8s-coredns--7c65d6cfc9--n6pct-eth0" Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.940 [INFO][5902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:54.947479 containerd[1543]: 2025-09-05 00:37:54.943 [INFO][5893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2" Sep 5 00:37:54.947479 containerd[1543]: time="2025-09-05T00:37:54.946385871Z" level=info msg="TearDown network for sandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\" successfully" Sep 5 00:37:55.040655 containerd[1543]: time="2025-09-05T00:37:55.040609199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:37:55.040774 containerd[1543]: time="2025-09-05T00:37:55.040687759Z" level=info msg="RemovePodSandbox \"f0e0641a9c72bc6a5b8e2ba938e63e0278ab3b9540ff61de117a21486477aeb2\" returns successfully" Sep 5 00:37:55.041305 containerd[1543]: time="2025-09-05T00:37:55.041192918Z" level=info msg="StopPodSandbox for \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\"" Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.082 [WARNING][5920] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ba02548f-dd9a-487d-9c64-7235820cae6f", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5", Pod:"coredns-7c65d6cfc9-qd2ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52f51ed53b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.083 [INFO][5920] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.083 [INFO][5920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" iface="eth0" netns="" Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.083 [INFO][5920] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.083 [INFO][5920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.107 [INFO][5928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" HandleID="k8s-pod-network.4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.107 [INFO][5928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.107 [INFO][5928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.118 [WARNING][5928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" HandleID="k8s-pod-network.4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.118 [INFO][5928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" HandleID="k8s-pod-network.4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.120 [INFO][5928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:55.125251 containerd[1543]: 2025-09-05 00:37:55.122 [INFO][5920] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:55.125992 containerd[1543]: time="2025-09-05T00:37:55.125295543Z" level=info msg="TearDown network for sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\" successfully" Sep 5 00:37:55.125992 containerd[1543]: time="2025-09-05T00:37:55.125331223Z" level=info msg="StopPodSandbox for \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\" returns successfully" Sep 5 00:37:55.125992 containerd[1543]: time="2025-09-05T00:37:55.125823102Z" level=info msg="RemovePodSandbox for \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\"" Sep 5 00:37:55.125992 containerd[1543]: time="2025-09-05T00:37:55.125850542Z" level=info msg="Forcibly stopping sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\"" Sep 5 00:37:55.160934 containerd[1543]: time="2025-09-05T00:37:55.160694206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:55.161199 containerd[1543]: time="2025-09-05T00:37:55.161173886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 5 00:37:55.162566 containerd[1543]: time="2025-09-05T00:37:55.162532723Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:55.174953 containerd[1543]: time="2025-09-05T00:37:55.174808984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:55.176473 containerd[1543]: time="2025-09-05T00:37:55.176421181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.911083309s" Sep 5 00:37:55.176473 containerd[1543]: time="2025-09-05T00:37:55.176466061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 5 00:37:55.178862 containerd[1543]: time="2025-09-05T00:37:55.178032499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 00:37:55.179209 containerd[1543]: time="2025-09-05T00:37:55.179184137Z" level=info msg="CreateContainer within sandbox \"3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 00:37:55.192660 containerd[1543]: time="2025-09-05T00:37:55.192603915Z" level=info msg="CreateContainer within sandbox \"3ba33afcdb656cc6253b6f4e2fde55a70326043b4d9edda428721d4ae7f43c3d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"318cfa4d61a1b499635350f6667ef89a52627a8a57047ed3e32225f90c594fd8\"" Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.158 [WARNING][5945] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ba02548f-dd9a-487d-9c64-7235820cae6f", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 36, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03bd4911795bee55377792318f554841f1af0fb2da4616d6b75671df87f8e2c5", Pod:"coredns-7c65d6cfc9-qd2ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52f51ed53b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.158 [INFO][5945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.158 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" iface="eth0" netns="" Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.158 [INFO][5945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.158 [INFO][5945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.182 [INFO][5958] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" HandleID="k8s-pod-network.4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.182 [INFO][5958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.182 [INFO][5958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.190 [WARNING][5958] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" HandleID="k8s-pod-network.4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.190 [INFO][5958] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" HandleID="k8s-pod-network.4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Workload="localhost-k8s-coredns--7c65d6cfc9--qd2ch-eth0" Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.192 [INFO][5958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:55.197332 containerd[1543]: 2025-09-05 00:37:55.194 [INFO][5945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede" Sep 5 00:37:55.197837 containerd[1543]: time="2025-09-05T00:37:55.197365668Z" level=info msg="TearDown network for sandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\" successfully" Sep 5 00:37:55.197922 containerd[1543]: time="2025-09-05T00:37:55.197840667Z" level=info msg="StartContainer for \"318cfa4d61a1b499635350f6667ef89a52627a8a57047ed3e32225f90c594fd8\"" Sep 5 00:37:55.202503 containerd[1543]: time="2025-09-05T00:37:55.202458299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 5 00:37:55.202595 containerd[1543]: time="2025-09-05T00:37:55.202530539Z" level=info msg="RemovePodSandbox \"4ab195063d02b5ec87a1ac46a52544e0e2b50f57cc4591d3e226dcc5887ffede\" returns successfully" Sep 5 00:37:55.202974 containerd[1543]: time="2025-09-05T00:37:55.202949659Z" level=info msg="StopPodSandbox for \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\"" Sep 5 00:37:55.262955 containerd[1543]: time="2025-09-05T00:37:55.262890362Z" level=info msg="StartContainer for \"318cfa4d61a1b499635350f6667ef89a52627a8a57047ed3e32225f90c594fd8\" returns successfully" Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.240 [WARNING][5983] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8lpmx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b629d959-9ce0-4662-8a0e-a74c6f7f28b5", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881", Pod:"csi-node-driver-8lpmx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94694352815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.240 [INFO][5983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.240 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" iface="eth0" netns="" Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.240 [INFO][5983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.240 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.272 [INFO][6009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" HandleID="k8s-pod-network.e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.272 [INFO][6009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.272 [INFO][6009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.281 [WARNING][6009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" HandleID="k8s-pod-network.e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.281 [INFO][6009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" HandleID="k8s-pod-network.e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.282 [INFO][6009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:55.285827 containerd[1543]: 2025-09-05 00:37:55.284 [INFO][5983] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:55.287146 containerd[1543]: time="2025-09-05T00:37:55.285867645Z" level=info msg="TearDown network for sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\" successfully" Sep 5 00:37:55.287146 containerd[1543]: time="2025-09-05T00:37:55.285892725Z" level=info msg="StopPodSandbox for \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\" returns successfully" Sep 5 00:37:55.287146 containerd[1543]: time="2025-09-05T00:37:55.286337925Z" level=info msg="RemovePodSandbox for \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\"" Sep 5 00:37:55.287146 containerd[1543]: time="2025-09-05T00:37:55.286367045Z" level=info msg="Forcibly stopping sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\"" Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.334 [WARNING][6038] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8lpmx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b629d959-9ce0-4662-8a0e-a74c6f7f28b5", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881", Pod:"csi-node-driver-8lpmx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94694352815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.334 [INFO][6038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.334 [INFO][6038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" iface="eth0" netns="" Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.334 [INFO][6038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.334 [INFO][6038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.354 [INFO][6050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" HandleID="k8s-pod-network.e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.354 [INFO][6050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.354 [INFO][6050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.362 [WARNING][6050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" HandleID="k8s-pod-network.e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.362 [INFO][6050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" HandleID="k8s-pod-network.e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Workload="localhost-k8s-csi--node--driver--8lpmx-eth0" Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.364 [INFO][6050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:55.367240 containerd[1543]: 2025-09-05 00:37:55.365 [INFO][6038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd" Sep 5 00:37:55.367240 containerd[1543]: time="2025-09-05T00:37:55.367312875Z" level=info msg="TearDown network for sandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\" successfully" Sep 5 00:37:55.370573 containerd[1543]: time="2025-09-05T00:37:55.370544989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:37:55.370763 containerd[1543]: time="2025-09-05T00:37:55.370743789Z" level=info msg="RemovePodSandbox \"e14e7170b4b49aebc39a91c3c684c0daa47d5aafc52acba8d84b58bf21434fcd\" returns successfully" Sep 5 00:37:55.371421 containerd[1543]: time="2025-09-05T00:37:55.371399668Z" level=info msg="StopPodSandbox for \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\"" Sep 5 00:37:55.414456 kubelet[2634]: I0905 00:37:55.414332 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-5d4st" podStartSLOduration=25.662140749 podStartE2EDuration="45.4138686s" podCreationTimestamp="2025-09-05 00:37:10 +0000 UTC" firstStartedPulling="2025-09-05 00:37:35.425483489 +0000 UTC m=+41.457157177" lastFinishedPulling="2025-09-05 00:37:55.17721126 +0000 UTC m=+61.208885028" observedRunningTime="2025-09-05 00:37:55.413052881 +0000 UTC m=+61.444726569" watchObservedRunningTime="2025-09-05 00:37:55.4138686 +0000 UTC m=+61.445542328" Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.411 [WARNING][6069] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0", GenerateName:"calico-kube-controllers-788bb96d6f-", Namespace:"calico-system", SelfLink:"", UID:"92604847-f3c6-4b7c-9399-1aa229b56af1", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"788bb96d6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966", Pod:"calico-kube-controllers-788bb96d6f-s5fmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia038c45af76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.412 [INFO][6069] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.412 [INFO][6069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" iface="eth0" netns="" Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.412 [INFO][6069] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.412 [INFO][6069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.437 [INFO][6084] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" HandleID="k8s-pod-network.2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.437 [INFO][6084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.437 [INFO][6084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.446 [WARNING][6084] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" HandleID="k8s-pod-network.2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.446 [INFO][6084] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" HandleID="k8s-pod-network.2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.447 [INFO][6084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:55.451479 containerd[1543]: 2025-09-05 00:37:55.449 [INFO][6069] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:55.452430 containerd[1543]: time="2025-09-05T00:37:55.452368098Z" level=info msg="TearDown network for sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\" successfully" Sep 5 00:37:55.452562 containerd[1543]: time="2025-09-05T00:37:55.452545138Z" level=info msg="StopPodSandbox for \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\" returns successfully" Sep 5 00:37:55.453081 containerd[1543]: time="2025-09-05T00:37:55.453057017Z" level=info msg="RemovePodSandbox for \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\"" Sep 5 00:37:55.453130 containerd[1543]: time="2025-09-05T00:37:55.453089257Z" level=info msg="Forcibly stopping sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\"" Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.486 [WARNING][6117] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0", GenerateName:"calico-kube-controllers-788bb96d6f-", Namespace:"calico-system", SelfLink:"", UID:"92604847-f3c6-4b7c-9399-1aa229b56af1", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"788bb96d6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55e092ab8734f8234fe0dd4030f936039638abeba25ea93ab0345f398dd02966", Pod:"calico-kube-controllers-788bb96d6f-s5fmw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia038c45af76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.486 [INFO][6117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.486 [INFO][6117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" iface="eth0" netns="" Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.486 [INFO][6117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.486 [INFO][6117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.510 [INFO][6126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" HandleID="k8s-pod-network.2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.511 [INFO][6126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.511 [INFO][6126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.519 [WARNING][6126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" HandleID="k8s-pod-network.2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.519 [INFO][6126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" HandleID="k8s-pod-network.2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Workload="localhost-k8s-calico--kube--controllers--788bb96d6f--s5fmw-eth0" Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.520 [INFO][6126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:37:55.523694 containerd[1543]: 2025-09-05 00:37:55.521 [INFO][6117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041" Sep 5 00:37:55.524142 containerd[1543]: time="2025-09-05T00:37:55.523737824Z" level=info msg="TearDown network for sandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\" successfully" Sep 5 00:37:55.526795 containerd[1543]: time="2025-09-05T00:37:55.526761299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:37:55.526848 containerd[1543]: time="2025-09-05T00:37:55.526832699Z" level=info msg="RemovePodSandbox \"2dac183d0a8a83bb5ec3f09aa1f5bd259c7fad40bdd270d41e1932ce1e627041\" returns successfully" Sep 5 00:37:55.656137 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:55924.service - OpenSSH per-connection server daemon (10.0.0.1:55924). Sep 5 00:37:55.703222 sshd[6136]: Accepted publickey for core from 10.0.0.1 port 55924 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:37:55.705098 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:55.709055 systemd-logind[1518]: New session 13 of user core. Sep 5 00:37:55.718145 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:37:56.139641 sshd[6136]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:56.142888 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:55924.service: Deactivated successfully. Sep 5 00:37:56.145849 systemd-logind[1518]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:37:56.146390 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:37:56.147164 systemd-logind[1518]: Removed session 13. 
Sep 5 00:37:56.867750 containerd[1543]: time="2025-09-05T00:37:56.867702921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:56.868802 containerd[1543]: time="2025-09-05T00:37:56.868765039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 00:37:56.869919 containerd[1543]: time="2025-09-05T00:37:56.869881797Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:56.871813 containerd[1543]: time="2025-09-05T00:37:56.871770594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:56.872723 containerd[1543]: time="2025-09-05T00:37:56.872684473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.694595935s" Sep 5 00:37:56.872757 containerd[1543]: time="2025-09-05T00:37:56.872722233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 00:37:56.874833 containerd[1543]: time="2025-09-05T00:37:56.874803189Z" level=info msg="CreateContainer within sandbox \"802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 00:37:56.884510 containerd[1543]: time="2025-09-05T00:37:56.884282654Z" level=info msg="CreateContainer within sandbox \"802e3b66e4cd3fcec7156c834fc72560dd53dc689a7832417e7d7727635eb881\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c1b9e51eb8a63cc95048228d069a69a11ad46252901cd23631ab1c7d9cbbba25\"" Sep 5 00:37:56.884725 containerd[1543]: time="2025-09-05T00:37:56.884697454Z" level=info msg="StartContainer for \"c1b9e51eb8a63cc95048228d069a69a11ad46252901cd23631ab1c7d9cbbba25\"" Sep 5 00:37:56.927021 containerd[1543]: time="2025-09-05T00:37:56.926983266Z" level=info msg="StartContainer for \"c1b9e51eb8a63cc95048228d069a69a11ad46252901cd23631ab1c7d9cbbba25\" returns successfully" Sep 5 00:37:57.183991 kubelet[2634]: I0905 00:37:57.183953 2634 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 00:37:57.186364 kubelet[2634]: I0905 00:37:57.186340 2634 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 00:38:01.151149 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:60106.service - OpenSSH per-connection server daemon (10.0.0.1:60106). 
Sep 5 00:38:01.193011 sshd[6248]: Accepted publickey for core from 10.0.0.1 port 60106 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:38:01.194355 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:01.197908 systemd-logind[1518]: New session 14 of user core. Sep 5 00:38:01.207178 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:38:01.359281 sshd[6248]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:01.362116 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:60106.service: Deactivated successfully. Sep 5 00:38:01.364835 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:38:01.365143 systemd-logind[1518]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:38:01.366326 systemd-logind[1518]: Removed session 14. Sep 5 00:38:02.979718 kubelet[2634]: I0905 00:38:02.979668 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:38:03.004498 kubelet[2634]: I0905 00:38:03.004397 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8lpmx" podStartSLOduration=30.582627141 podStartE2EDuration="53.004383471s" podCreationTimestamp="2025-09-05 00:37:10 +0000 UTC" firstStartedPulling="2025-09-05 00:37:34.451800341 +0000 UTC m=+40.483474069" lastFinishedPulling="2025-09-05 00:37:56.873556671 +0000 UTC m=+62.905230399" observedRunningTime="2025-09-05 00:37:57.421946287 +0000 UTC m=+63.453620015" watchObservedRunningTime="2025-09-05 00:38:03.004383471 +0000 UTC m=+69.036057199" Sep 5 00:38:06.371130 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:60112.service - OpenSSH per-connection server daemon (10.0.0.1:60112). Sep 5 00:38:06.405560 sshd[6267]: Accepted publickey for core from 10.0.0.1 port 60112 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:38:06.406825 sshd[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:06.410129 systemd-logind[1518]: New session 15 of user core. Sep 5 00:38:06.418133 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:38:06.536927 sshd[6267]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:06.539357 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:60112.service: Deactivated successfully. Sep 5 00:38:06.541679 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:38:06.542876 systemd-logind[1518]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:38:06.543631 systemd-logind[1518]: Removed session 15. Sep 5 00:38:09.829649 kubelet[2634]: I0905 00:38:09.829422 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:38:11.551377 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:58720.service - OpenSSH per-connection server daemon (10.0.0.1:58720). Sep 5 00:38:11.610925 sshd[6328]: Accepted publickey for core from 10.0.0.1 port 58720 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:38:11.615677 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:11.638039 systemd-logind[1518]: New session 16 of user core. Sep 5 00:38:11.646991 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 00:38:11.910232 sshd[6328]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:11.918204 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:58730.service - OpenSSH per-connection server daemon (10.0.0.1:58730). 
Sep 5 00:38:11.918575 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:58720.service: Deactivated successfully. Sep 5 00:38:11.922846 systemd-logind[1518]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:38:11.922925 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:38:11.925617 systemd-logind[1518]: Removed session 16. Sep 5 00:38:11.963282 sshd[6341]: Accepted publickey for core from 10.0.0.1 port 58730 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:38:11.965052 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:11.972629 systemd-logind[1518]: New session 17 of user core. Sep 5 00:38:11.979168 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:38:12.209466 sshd[6341]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:12.218260 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:58736.service - OpenSSH per-connection server daemon (10.0.0.1:58736). Sep 5 00:38:12.219259 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:58730.service: Deactivated successfully. Sep 5 00:38:12.221344 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:38:12.223079 systemd-logind[1518]: Session 17 logged out. Waiting for processes to exit. Sep 5 00:38:12.224091 systemd-logind[1518]: Removed session 17. Sep 5 00:38:12.257115 sshd[6355]: Accepted publickey for core from 10.0.0.1 port 58736 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:38:12.258368 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:12.262134 systemd-logind[1518]: New session 18 of user core. Sep 5 00:38:12.273154 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 00:38:13.946284 sshd[6355]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:13.957175 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:58742.service - OpenSSH per-connection server daemon (10.0.0.1:58742). Sep 5 00:38:13.957535 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:58736.service: Deactivated successfully. Sep 5 00:38:13.963991 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 00:38:13.970670 systemd-logind[1518]: Session 18 logged out. Waiting for processes to exit. Sep 5 00:38:13.974663 systemd-logind[1518]: Removed session 18. Sep 5 00:38:14.016674 sshd[6375]: Accepted publickey for core from 10.0.0.1 port 58742 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:38:14.018052 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:14.022206 systemd-logind[1518]: New session 19 of user core. Sep 5 00:38:14.032216 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 00:38:14.704868 sshd[6375]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:14.714305 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:58752.service - OpenSSH per-connection server daemon (10.0.0.1:58752). Sep 5 00:38:14.714838 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:58742.service: Deactivated successfully. Sep 5 00:38:14.718308 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 00:38:14.719080 systemd-logind[1518]: Session 19 logged out. Waiting for processes to exit. Sep 5 00:38:14.720229 systemd-logind[1518]: Removed session 19. 
Sep 5 00:38:14.754385 sshd[6391]: Accepted publickey for core from 10.0.0.1 port 58752 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:38:14.755798 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:14.759915 systemd-logind[1518]: New session 20 of user core. Sep 5 00:38:14.767223 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 00:38:14.889615 sshd[6391]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:14.893004 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:58752.service: Deactivated successfully. Sep 5 00:38:14.895646 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:38:14.896424 systemd-logind[1518]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:38:14.897234 systemd-logind[1518]: Removed session 20. Sep 5 00:38:19.901133 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:58756.service - OpenSSH per-connection server daemon (10.0.0.1:58756). Sep 5 00:38:19.939156 sshd[6439]: Accepted publickey for core from 10.0.0.1 port 58756 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:38:19.940643 sshd[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:19.944499 systemd-logind[1518]: New session 21 of user core. Sep 5 00:38:19.959181 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 00:38:20.073634 kubelet[2634]: E0905 00:38:20.073600 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:20.083731 sshd[6439]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:20.086226 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:58756.service: Deactivated successfully. Sep 5 00:38:20.089356 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 00:38:20.089596 systemd-logind[1518]: Session 21 logged out. Waiting for processes to exit. Sep 5 00:38:20.091318 systemd-logind[1518]: Removed session 21. Sep 5 00:38:25.102209 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:33646.service - OpenSSH per-connection server daemon (10.0.0.1:33646). Sep 5 00:38:25.149375 sshd[6479]: Accepted publickey for core from 10.0.0.1 port 33646 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:38:25.150951 sshd[6479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:25.156973 systemd-logind[1518]: New session 22 of user core. Sep 5 00:38:25.168189 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 00:38:25.341330 sshd[6479]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:25.345668 systemd-logind[1518]: Session 22 logged out. Waiting for processes to exit. Sep 5 00:38:25.346343 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:33646.service: Deactivated successfully. Sep 5 00:38:25.348527 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 00:38:25.349126 systemd-logind[1518]: Removed session 22. 
Sep 5 00:38:27.072872 kubelet[2634]: E0905 00:38:27.072823 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:29.072626 kubelet[2634]: E0905 00:38:29.072583 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:30.073367 kubelet[2634]: E0905 00:38:30.073324 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:30.356157 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:50998.service - OpenSSH per-connection server daemon (10.0.0.1:50998). Sep 5 00:38:30.394668 sshd[6520]: Accepted publickey for core from 10.0.0.1 port 50998 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:38:30.396095 sshd[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:30.403218 systemd-logind[1518]: New session 23 of user core. Sep 5 00:38:30.411639 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 00:38:30.614210 sshd[6520]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:30.617335 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:50998.service: Deactivated successfully. Sep 5 00:38:30.621122 systemd-logind[1518]: Session 23 logged out. Waiting for processes to exit. Sep 5 00:38:30.621190 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 00:38:30.622637 systemd-logind[1518]: Removed session 23. Sep 5 00:38:35.636315 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:51000.service - OpenSSH per-connection server daemon (10.0.0.1:51000). Sep 5 00:38:35.694396 sshd[6537]: Accepted publickey for core from 10.0.0.1 port 51000 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:38:35.695699 sshd[6537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:35.702994 systemd-logind[1518]: New session 24 of user core. Sep 5 00:38:35.715172 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 5 00:38:36.176344 sshd[6537]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:36.180083 systemd-logind[1518]: Session 24 logged out. Waiting for processes to exit. Sep 5 00:38:36.181218 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:51000.service: Deactivated successfully. Sep 5 00:38:36.183999 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 00:38:36.185070 systemd-logind[1518]: Removed session 24.