Aug 13 00:21:52.898159 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:21:52.898181 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 13 00:21:52.898190 kernel: KASLR enabled
Aug 13 00:21:52.898196 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:21:52.898202 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 13 00:21:52.898207 kernel: random: crng init done
Aug 13 00:21:52.898214 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:21:52.898220 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 13 00:21:52.898226 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 00:21:52.898234 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:21:52.898240 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:21:52.898245 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:21:52.898251 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:21:52.898257 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:21:52.898265 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:21:52.898273 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:21:52.898279 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:21:52.898285 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:21:52.898292 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 13 00:21:52.898298 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:21:52.898304 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:21:52.898311 kernel: NUMA: NODE_DATA [mem 0xdc956800-0xdc95bfff]
Aug 13 00:21:52.898317 kernel: Zone ranges:
Aug 13 00:21:52.898324 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:21:52.898330 kernel: DMA32 empty
Aug 13 00:21:52.898337 kernel: Normal empty
Aug 13 00:21:52.898343 kernel: Movable zone start for each node
Aug 13 00:21:52.898349 kernel: Early memory node ranges
Aug 13 00:21:52.898356 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 13 00:21:52.898362 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 13 00:21:52.898368 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 13 00:21:52.898375 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 13 00:21:52.898381 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 13 00:21:52.898387 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 13 00:21:52.898394 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 13 00:21:52.898400 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:21:52.898406 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 13 00:21:52.898414 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:21:52.898420 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:21:52.898427 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:21:52.898436 kernel: psci: Trusted OS migration not required
Aug 13 00:21:52.898442 kernel: psci: SMC Calling Convention v1.1
Aug 13 00:21:52.898449 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 13 00:21:52.898457 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 13 00:21:52.898464 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 13 00:21:52.898471 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 13 00:21:52.898478 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:21:52.898485 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:21:52.898491 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:21:52.898498 kernel: CPU features: detected: Spectre-v4
Aug 13 00:21:52.898504 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:21:52.898511 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:21:52.898518 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:21:52.898526 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:21:52.898533 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:21:52.898539 kernel: alternatives: applying boot alternatives
Aug 13 00:21:52.898547 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:21:52.898554 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:21:52.898560 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:21:52.898567 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:21:52.898574 kernel: Fallback order for Node 0: 0
Aug 13 00:21:52.898580 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 13 00:21:52.898587 kernel: Policy zone: DMA
Aug 13 00:21:52.898594 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:21:52.898602 kernel: software IO TLB: area num 4.
Aug 13 00:21:52.898609 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 13 00:21:52.898616 kernel: Memory: 2386396K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185892K reserved, 0K cma-reserved)
Aug 13 00:21:52.898622 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:21:52.898636 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:21:52.898643 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:21:52.898650 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:21:52.898657 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:21:52.898664 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:21:52.898671 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:21:52.898677 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:21:52.898686 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:21:52.898693 kernel: GICv3: 256 SPIs implemented
Aug 13 00:21:52.898700 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:21:52.898707 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:21:52.898713 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 13 00:21:52.898720 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 13 00:21:52.898727 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 13 00:21:52.898733 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Aug 13 00:21:52.898740 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Aug 13 00:21:52.898747 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 13 00:21:52.898754 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 13 00:21:52.898761 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:21:52.898770 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:21:52.898777 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:21:52.898784 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:21:52.898790 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:21:52.898797 kernel: arm-pv: using stolen time PV
Aug 13 00:21:52.898804 kernel: Console: colour dummy device 80x25
Aug 13 00:21:52.898811 kernel: ACPI: Core revision 20230628
Aug 13 00:21:52.898818 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:21:52.898825 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:21:52.898832 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:21:52.898840 kernel: landlock: Up and running.
Aug 13 00:21:52.898847 kernel: SELinux: Initializing.
Aug 13 00:21:52.898854 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:21:52.898861 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:21:52.898868 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:21:52.898876 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:21:52.898883 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:21:52.898890 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:21:52.898897 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 13 00:21:52.898905 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 13 00:21:52.898911 kernel: Remapping and enabling EFI services.
Aug 13 00:21:52.898918 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:21:52.898925 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:21:52.898932 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 13 00:21:52.898939 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 13 00:21:52.898946 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:21:52.898953 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:21:52.898960 kernel: Detected PIPT I-cache on CPU2
Aug 13 00:21:52.898967 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 13 00:21:52.898975 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 13 00:21:52.898983 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:21:52.898994 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 13 00:21:52.899003 kernel: Detected PIPT I-cache on CPU3
Aug 13 00:21:52.899010 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 13 00:21:52.899018 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 13 00:21:52.899025 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:21:52.899032 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 13 00:21:52.899039 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:21:52.899048 kernel: SMP: Total of 4 processors activated.
Aug 13 00:21:52.899055 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:21:52.899062 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:21:52.899069 kernel: CPU features: detected: Common not Private translations
Aug 13 00:21:52.899103 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:21:52.899111 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 13 00:21:52.899118 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:21:52.899125 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:21:52.899135 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:21:52.899143 kernel: CPU features: detected: RAS Extension Support
Aug 13 00:21:52.899150 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 13 00:21:52.899157 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:21:52.899164 kernel: alternatives: applying system-wide alternatives
Aug 13 00:21:52.899171 kernel: devtmpfs: initialized
Aug 13 00:21:52.899179 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:21:52.899186 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:21:52.899193 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:21:52.899202 kernel: SMBIOS 3.0.0 present.
Aug 13 00:21:52.899209 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 13 00:21:52.899216 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:21:52.899224 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:21:52.899231 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:21:52.899239 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:21:52.899246 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:21:52.899253 kernel: audit: type=2000 audit(0.028:1): state=initialized audit_enabled=0 res=1
Aug 13 00:21:52.899260 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:21:52.899269 kernel: cpuidle: using governor menu
Aug 13 00:21:52.899276 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:21:52.899283 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:21:52.899291 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:21:52.899298 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:21:52.899305 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 13 00:21:52.899312 kernel: Modules: 0 pages in range for non-PLT usage
Aug 13 00:21:52.899319 kernel: Modules: 509008 pages in range for PLT usage
Aug 13 00:21:52.899326 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:21:52.899335 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:21:52.899342 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:21:52.899349 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 13 00:21:52.899356 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:21:52.899364 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:21:52.899371 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:21:52.899378 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 13 00:21:52.899385 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:21:52.899392 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:21:52.899401 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:21:52.899408 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:21:52.899415 kernel: ACPI: Interpreter enabled
Aug 13 00:21:52.899422 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:21:52.899429 kernel: ACPI: MCFG table detected, 1 entries
Aug 13 00:21:52.899436 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:21:52.899443 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:21:52.899451 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:21:52.899584 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:21:52.899672 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 00:21:52.899739 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 00:21:52.899803 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 13 00:21:52.899867 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 13 00:21:52.899877 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 13 00:21:52.899884 kernel: PCI host bridge to bus 0000:00
Aug 13 00:21:52.899954 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 13 00:21:52.900017 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 13 00:21:52.900097 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 13 00:21:52.900163 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:21:52.900248 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 13 00:21:52.900325 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 00:21:52.900394 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 13 00:21:52.900465 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 13 00:21:52.900531 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:21:52.900597 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:21:52.900676 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 13 00:21:52.900745 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 13 00:21:52.900805 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 13 00:21:52.900864 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 13 00:21:52.900925 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 13 00:21:52.900935 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 13 00:21:52.900943 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 13 00:21:52.900950 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 13 00:21:52.900958 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 13 00:21:52.900966 kernel: iommu: Default domain type: Translated
Aug 13 00:21:52.900974 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:21:52.900982 kernel: efivars: Registered efivars operations
Aug 13 00:21:52.901004 kernel: vgaarb: loaded
Aug 13 00:21:52.901014 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:21:52.901022 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:21:52.901030 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:21:52.901037 kernel: pnp: PnP ACPI init
Aug 13 00:21:52.901140 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 13 00:21:52.901152 kernel: pnp: PnP ACPI: found 1 devices
Aug 13 00:21:52.901160 kernel: NET: Registered PF_INET protocol family
Aug 13 00:21:52.901167 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:21:52.901179 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:21:52.901186 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:21:52.901194 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:21:52.901201 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:21:52.901209 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:21:52.901216 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:21:52.901223 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:21:52.901231 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:21:52.901238 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:21:52.901247 kernel: kvm [1]: HYP mode not available
Aug 13 00:21:52.901254 kernel: Initialise system trusted keyrings
Aug 13 00:21:52.901262 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:21:52.901269 kernel: Key type asymmetric registered
Aug 13 00:21:52.901276 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:21:52.901283 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:21:52.901291 kernel: io scheduler mq-deadline registered
Aug 13 00:21:52.901298 kernel: io scheduler kyber registered
Aug 13 00:21:52.901305 kernel: io scheduler bfq registered
Aug 13 00:21:52.901314 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 13 00:21:52.901322 kernel: ACPI: button: Power Button [PWRB]
Aug 13 00:21:52.901329 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 13 00:21:52.901397 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 13 00:21:52.901407 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:21:52.901414 kernel: thunder_xcv, ver 1.0
Aug 13 00:21:52.901422 kernel: thunder_bgx, ver 1.0
Aug 13 00:21:52.901429 kernel: nicpf, ver 1.0
Aug 13 00:21:52.901436 kernel: nicvf, ver 1.0
Aug 13 00:21:52.901511 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:21:52.901574 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:21:52 UTC (1755044512)
Aug 13 00:21:52.901584 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:21:52.901591 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 13 00:21:52.901599 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 13 00:21:52.901606 kernel: watchdog: Hard watchdog permanently disabled
Aug 13 00:21:52.901614 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:21:52.901621 kernel: Segment Routing with IPv6
Aug 13 00:21:52.901638 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:21:52.901645 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:21:52.901652 kernel: Key type dns_resolver registered
Aug 13 00:21:52.901659 kernel: registered taskstats version 1
Aug 13 00:21:52.901667 kernel: Loading compiled-in X.509 certificates
Aug 13 00:21:52.901674 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 13 00:21:52.901681 kernel: Key type .fscrypt registered
Aug 13 00:21:52.901688 kernel: Key type fscrypt-provisioning registered
Aug 13 00:21:52.901695 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:21:52.901705 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:21:52.901713 kernel: ima: No architecture policies found
Aug 13 00:21:52.901720 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:21:52.901728 kernel: clk: Disabling unused clocks
Aug 13 00:21:52.901735 kernel: Freeing unused kernel memory: 39424K
Aug 13 00:21:52.901742 kernel: Run /init as init process
Aug 13 00:21:52.901750 kernel: with arguments:
Aug 13 00:21:52.901757 kernel: /init
Aug 13 00:21:52.901764 kernel: with environment:
Aug 13 00:21:52.901773 kernel: HOME=/
Aug 13 00:21:52.901780 kernel: TERM=linux
Aug 13 00:21:52.901787 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:21:52.901797 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:21:52.901806 systemd[1]: Detected virtualization kvm.
Aug 13 00:21:52.901814 systemd[1]: Detected architecture arm64.
Aug 13 00:21:52.901821 systemd[1]: Running in initrd.
Aug 13 00:21:52.901830 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:21:52.901838 systemd[1]: Hostname set to <localhost>.
Aug 13 00:21:52.901846 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:21:52.901854 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:21:52.901862 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:21:52.901869 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:21:52.901878 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:21:52.901886 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:21:52.901895 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:21:52.901903 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:21:52.901913 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:21:52.901921 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:21:52.901928 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:21:52.901936 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:21:52.901944 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:21:52.901954 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:21:52.901961 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:21:52.901969 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:21:52.901977 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:21:52.901985 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:21:52.901993 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:21:52.902000 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 00:21:52.902008 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:21:52.902016 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:21:52.902029 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:21:52.902037 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:21:52.902045 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:21:52.902053 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:21:52.902060 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:21:52.902068 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:21:52.902138 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:21:52.902148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:21:52.902160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:21:52.902281 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:21:52.902292 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:21:52.902300 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:21:52.902309 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:21:52.902322 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:21:52.902430 systemd-journald[240]: Collecting audit messages is disabled.
Aug 13 00:21:52.902456 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:21:52.902467 systemd-journald[240]: Journal started
Aug 13 00:21:52.902489 systemd-journald[240]: Runtime Journal (/run/log/journal/560da505ede04caa8195fe28453aa077) is 5.9M, max 47.3M, 41.4M free.
Aug 13 00:21:52.910299 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:21:52.910342 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:21:52.893398 systemd-modules-load[241]: Inserted module 'overlay'
Aug 13 00:21:52.911407 kernel: Bridge firewalling registered
Aug 13 00:21:52.910822 systemd-modules-load[241]: Inserted module 'br_netfilter'
Aug 13 00:21:52.913759 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:21:52.915163 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:21:52.916158 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:21:52.921571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:21:52.925264 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:21:52.926328 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:21:52.929333 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:21:52.933826 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:21:52.936461 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:21:52.937506 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:21:52.948777 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:21:52.961527 dracut-cmdline[273]: dracut-dracut-053
Aug 13 00:21:52.964280 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:21:52.981168 systemd-resolved[277]: Positive Trust Anchors:
Aug 13 00:21:52.981184 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:21:52.981216 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:21:52.985876 systemd-resolved[277]: Defaulting to hostname 'linux'.
Aug 13 00:21:52.986816 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:21:52.988656 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:21:53.036107 kernel: SCSI subsystem initialized
Aug 13 00:21:53.041097 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:21:53.051126 kernel: iscsi: registered transport (tcp)
Aug 13 00:21:53.064108 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:21:53.064139 kernel: QLogic iSCSI HBA Driver
Aug 13 00:21:53.107978 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:21:53.123262 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:21:53.138336 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:21:53.138388 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:21:53.139152 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:21:53.187115 kernel: raid6: neonx8 gen() 15773 MB/s
Aug 13 00:21:53.204113 kernel: raid6: neonx4 gen() 15637 MB/s
Aug 13 00:21:53.224676 kernel: raid6: neonx2 gen() 13281 MB/s
Aug 13 00:21:53.238127 kernel: raid6: neonx1 gen() 10475 MB/s
Aug 13 00:21:53.255124 kernel: raid6: int64x8 gen() 6953 MB/s
Aug 13 00:21:53.272097 kernel: raid6: int64x4 gen() 7347 MB/s
Aug 13 00:21:53.289096 kernel: raid6: int64x2 gen() 6124 MB/s
Aug 13 00:21:53.306096 kernel: raid6: int64x1 gen() 5052 MB/s
Aug 13 00:21:53.306129 kernel: raid6: using algorithm neonx8 gen() 15773 MB/s
Aug 13 00:21:53.323106 kernel: raid6: .... xor() 11865 MB/s, rmw enabled
Aug 13 00:21:53.323131 kernel: raid6: using neon recovery algorithm
Aug 13 00:21:53.328107 kernel: xor: measuring software checksum speed
Aug 13 00:21:53.329136 kernel: 8regs : 17481 MB/sec
Aug 13 00:21:53.329153 kernel: 32regs : 19299 MB/sec
Aug 13 00:21:53.330103 kernel: arm64_neon : 27150 MB/sec
Aug 13 00:21:53.330119 kernel: xor: using function: arm64_neon (27150 MB/sec)
Aug 13 00:21:53.380108 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:21:53.390570 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:21:53.398254 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:21:53.413257 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Aug 13 00:21:53.416456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:21:53.432268 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:21:53.443643 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Aug 13 00:21:53.469300 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:21:53.476206 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:21:53.518430 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:21:53.529328 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:21:53.542118 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:21:53.543402 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:21:53.544985 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:21:53.546645 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:21:53.557280 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:21:53.565879 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 13 00:21:53.566065 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 00:21:53.569707 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:21:53.573601 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:21:53.573732 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:21:53.577473 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:21:53.578471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:21:53.584195 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:21:53.584218 kernel: GPT:9289727 != 19775487
Aug 13 00:21:53.584229 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:21:53.584238 kernel: GPT:9289727 != 19775487
Aug 13 00:21:53.584253 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:21:53.584265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:21:53.578619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:21:53.582443 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:21:53.592551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:21:53.603027 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (523)
Aug 13 00:21:53.603085 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (512)
Aug 13 00:21:53.607438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:21:53.613474 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 00:21:53.626657 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 00:21:53.633170 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 00:21:53.634234 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 00:21:53.640180 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 00:21:53.652226 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:21:53.653869 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:21:53.666487 disk-uuid[549]: Primary Header is updated.
Aug 13 00:21:53.666487 disk-uuid[549]: Secondary Entries is updated.
Aug 13 00:21:53.666487 disk-uuid[549]: Secondary Header is updated.
Aug 13 00:21:53.674942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:21:53.684576 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:21:54.689100 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:21:54.689847 disk-uuid[553]: The operation has completed successfully.
Aug 13 00:21:54.716118 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:21:54.716217 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:21:54.731257 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:21:54.734115 sh[574]: Success
Aug 13 00:21:54.746106 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 13 00:21:54.790555 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:21:54.792186 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:21:54.793962 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:21:54.803720 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 13 00:21:54.803766 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:21:54.805097 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:21:54.805136 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:21:54.805324 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:21:54.809435 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:21:54.810786 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:21:54.823226 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:21:54.824722 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:21:54.831832 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:21:54.831874 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:21:54.831885 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:21:54.835105 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:21:54.844153 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:21:54.845486 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:21:54.850806 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:21:54.861262 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:21:54.928198 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:21:54.941252 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:21:54.955126 ignition[666]: Ignition 2.19.0
Aug 13 00:21:54.955137 ignition[666]: Stage: fetch-offline
Aug 13 00:21:54.955173 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:21:54.955182 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:21:54.955345 ignition[666]: parsed url from cmdline: ""
Aug 13 00:21:54.955349 ignition[666]: no config URL provided
Aug 13 00:21:54.955353 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:21:54.955360 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:21:54.955384 ignition[666]: op(1): [started] loading QEMU firmware config module
Aug 13 00:21:54.955388 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 00:21:54.962008 ignition[666]: op(1): [finished] loading QEMU firmware config module
Aug 13 00:21:54.962033 ignition[666]: QEMU firmware config was not found. Ignoring...
Aug 13 00:21:54.966269 systemd-networkd[765]: lo: Link UP
Aug 13 00:21:54.966281 systemd-networkd[765]: lo: Gained carrier
Aug 13 00:21:54.966964 systemd-networkd[765]: Enumeration completed
Aug 13 00:21:54.967192 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:21:54.967421 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:21:54.967424 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:21:54.968265 systemd-networkd[765]: eth0: Link UP
Aug 13 00:21:54.968268 systemd-networkd[765]: eth0: Gained carrier
Aug 13 00:21:54.968275 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:21:54.968513 systemd[1]: Reached target network.target - Network.
Aug 13 00:21:54.987122 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:21:55.011997 ignition[666]: parsing config with SHA512: 47068e889855f9d33a81246503ae9f323602cf88c4d519abce528eabb952819ca1e715c0f55c8997bc38ff838f3ad439dbd6603f98c3c31907b3b93a625e5eba
Aug 13 00:21:55.017628 unknown[666]: fetched base config from "system"
Aug 13 00:21:55.017649 unknown[666]: fetched user config from "qemu"
Aug 13 00:21:55.018554 ignition[666]: fetch-offline: fetch-offline passed
Aug 13 00:21:55.018647 ignition[666]: Ignition finished successfully
Aug 13 00:21:55.020304 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:21:55.021361 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 00:21:55.031294 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:21:55.042485 ignition[771]: Ignition 2.19.0
Aug 13 00:21:55.042495 ignition[771]: Stage: kargs
Aug 13 00:21:55.042681 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:21:55.042691 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:21:55.043598 ignition[771]: kargs: kargs passed
Aug 13 00:21:55.043687 ignition[771]: Ignition finished successfully
Aug 13 00:21:55.045680 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:21:55.047746 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:21:55.061545 ignition[780]: Ignition 2.19.0
Aug 13 00:21:55.061556 ignition[780]: Stage: disks
Aug 13 00:21:55.061751 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:21:55.061761 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:21:55.062650 ignition[780]: disks: disks passed
Aug 13 00:21:55.062700 ignition[780]: Ignition finished successfully
Aug 13 00:21:55.065416 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:21:55.066359 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:21:55.067405 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:21:55.068829 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:21:55.070207 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:21:55.071737 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:21:55.082252 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:21:55.093573 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 00:21:55.097886 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:21:55.105195 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:21:55.147102 kernel: EXT4-fs (vda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 13 00:21:55.147467 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:21:55.148509 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:21:55.162209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:21:55.164070 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:21:55.164955 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:21:55.164996 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:21:55.165019 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:21:55.170363 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:21:55.172251 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:21:55.176101 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (798)
Aug 13 00:21:55.179841 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:21:55.179890 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:21:55.179902 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:21:55.184342 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:21:55.185179 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:21:55.239903 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:21:55.243967 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:21:55.247129 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:21:55.252156 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:21:55.357309 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:21:55.367200 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:21:55.368560 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:21:55.374117 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:21:55.393629 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:21:55.398561 ignition[911]: INFO : Ignition 2.19.0
Aug 13 00:21:55.398561 ignition[911]: INFO : Stage: mount
Aug 13 00:21:55.400024 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:21:55.400024 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:21:55.400024 ignition[911]: INFO : mount: mount passed
Aug 13 00:21:55.400024 ignition[911]: INFO : Ignition finished successfully
Aug 13 00:21:55.401560 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:21:55.411220 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:21:55.803067 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:21:55.815265 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:21:55.821864 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (925)
Aug 13 00:21:55.821906 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:21:55.821917 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:21:55.823124 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:21:55.825098 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:21:55.826249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:21:55.843098 ignition[942]: INFO : Ignition 2.19.0
Aug 13 00:21:55.843098 ignition[942]: INFO : Stage: files
Aug 13 00:21:55.844421 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:21:55.844421 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:21:55.844421 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:21:55.846938 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:21:55.846938 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:21:55.850287 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:21:55.851376 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:21:55.851376 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:21:55.850850 unknown[942]: wrote ssh authorized keys file for user: core
Aug 13 00:21:55.854276 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 13 00:21:55.854276 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Aug 13 00:21:55.922633 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:21:56.027207 systemd-networkd[765]: eth0: Gained IPv6LL
Aug 13 00:21:56.625906 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 13 00:21:56.627703 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:21:56.627703 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:21:56.627703 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:21:56.627703 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:21:56.627703 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:21:56.627703 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:21:56.627703 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:21:56.627703 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:21:56.627703 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:21:56.627703 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:21:56.627703 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:21:56.642673 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:21:56.642673 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:21:56.642673 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Aug 13 00:21:56.962006 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:21:57.426384 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:21:57.426384 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:21:57.429363 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:21:57.429363 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:21:57.429363 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:21:57.429363 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 13 00:21:57.429363 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:21:57.429363 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:21:57.429363 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 13 00:21:57.429363 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:21:57.463745 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:21:57.468197 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:21:57.470162 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:21:57.470162 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:21:57.470162 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:21:57.470162 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:21:57.470162 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:21:57.470162 ignition[942]: INFO : files: files passed
Aug 13 00:21:57.470162 ignition[942]: INFO : Ignition finished successfully
Aug 13 00:21:57.471292 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:21:57.486291 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:21:57.488208 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:21:57.493483 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:21:57.493591 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:21:57.496321 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 13 00:21:57.498393 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:21:57.498393 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:21:57.501180 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:21:57.501588 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:21:57.503628 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:21:57.514293 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:21:57.539115 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:21:57.539290 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:21:57.541219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:21:57.543011 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:21:57.544671 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:21:57.554269 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:21:57.570141 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:21:57.572819 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:21:57.585863 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:21:57.586877 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:21:57.588553 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:21:57.589871 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:21:57.590007 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:21:57.591926 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:21:57.593470 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:21:57.594752 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:21:57.596023 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:21:57.597522 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:21:57.598976 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:21:57.600378 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:21:57.601873 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:21:57.603369 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:21:57.604804 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:21:57.605947 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:21:57.606094 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:21:57.607859 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:21:57.609375 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:21:57.610956 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:21:57.614142 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:21:57.615149 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:21:57.615277 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:21:57.617532 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:21:57.617660 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:21:57.619141 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:21:57.620297 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:21:57.621189 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:21:57.622628 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:21:57.623884 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:21:57.625608 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:21:57.625753 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:21:57.626884 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:21:57.626980 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:21:57.628228 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:21:57.628342 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:21:57.629727 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:21:57.629831 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:21:57.637292 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:21:57.638815 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:21:57.639594 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:21:57.639731 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:21:57.641322 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:21:57.641429 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:21:57.647356 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:21:57.648274 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:21:57.652410 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:21:57.653671 ignition[998]: INFO : Ignition 2.19.0 Aug 13 00:21:57.653671 ignition[998]: INFO : Stage: umount Aug 13 00:21:57.653671 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:21:57.653671 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:21:57.657527 ignition[998]: INFO : umount: umount passed Aug 13 00:21:57.658895 ignition[998]: INFO : Ignition finished successfully Aug 13 00:21:57.660136 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:21:57.660258 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:21:57.661712 systemd[1]: Stopped target network.target - Network. Aug 13 00:21:57.662585 systemd[1]: ignition-disks.service: Deactivated successfully. 
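
[Editor's note: the umount-stage lines above, 'no configs at "/usr/lib/ignition/base.d"' and 'no config dir at "/usr/lib/ignition/base.platform.d/qemu"', are informational: Ignition probes a generic and a platform-specific directory for base config fragments and carries on when they are absent. A minimal sketch of that probe pattern follows; it is an illustration, not Ignition's actual code.]

```go
// Sketch: probing Ignition's base-config directories the way the
// umount-stage messages above suggest; absence is reported, not fatal.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	platform := "qemu" // platform ID reported in the log
	dirs := []string{
		"/usr/lib/ignition/base.d",
		filepath.Join("/usr/lib/ignition/base.platform.d", platform),
	}
	for _, d := range dirs {
		entries, err := os.ReadDir(d)
		switch {
		case os.IsNotExist(err):
			fmt.Printf("no config dir at %q\n", d)
		case err != nil:
			fmt.Printf("error reading %q: %v\n", d, err)
		case len(entries) == 0:
			fmt.Printf("no configs at %q\n", d)
		default:
			fmt.Printf("%d config fragment(s) in %q\n", len(entries), d)
		}
	}
}
```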
Aug 13 00:21:57.662669 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:21:57.664192 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:21:57.664232 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:21:57.665596 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:21:57.665646 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:21:57.667123 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:21:57.667169 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:21:57.669000 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:21:57.670281 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:21:57.679161 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:21:57.679318 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:21:57.681113 systemd-networkd[765]: eth0: DHCPv6 lease lost Aug 13 00:21:57.683828 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:21:57.684001 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:21:57.688664 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:21:57.688717 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:21:57.704295 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:21:57.705141 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:21:57.705220 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:21:57.706157 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:21:57.706197 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:21:57.706942 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:21:57.706977 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:21:57.708628 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:21:57.708674 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:21:57.710465 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:21:57.715263 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:21:57.715355 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:21:57.719137 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:21:57.719218 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:21:57.721410 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:21:57.721559 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:21:57.724184 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:21:57.724283 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:21:57.726255 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:21:57.726351 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:21:57.727324 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:21:57.727361 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Aug 13 00:21:57.728769 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:21:57.728819 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:21:57.730850 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:21:57.730900 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:21:57.733625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:21:57.733676 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:21:57.745295 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:21:57.746606 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:21:57.746676 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:21:57.748496 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:21:57.748537 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:21:57.750173 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:21:57.750215 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:21:57.751990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:21:57.752031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:21:57.754347 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:21:57.754447 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:21:57.756314 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:21:57.758095 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:21:57.768605 systemd[1]: Switching root. Aug 13 00:21:57.798007 systemd-journald[240]: Journal stopped Aug 13 00:21:58.544696 systemd-journald[240]: Received SIGTERM from PID 1 (systemd). Aug 13 00:21:58.544755 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:21:58.544768 kernel: SELinux: policy capability open_perms=1 Aug 13 00:21:58.544778 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:21:58.544789 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:21:58.544799 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:21:58.544809 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:21:58.544818 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:21:58.544828 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:21:58.544839 kernel: audit: type=1403 audit(1755044517.956:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:21:58.544855 systemd[1]: Successfully loaded SELinux policy in 34.623ms. Aug 13 00:21:58.544872 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.430ms. Aug 13 00:21:58.544885 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:21:58.544897 systemd[1]: Detected virtualization kvm. Aug 13 00:21:58.544907 systemd[1]: Detected architecture arm64. 
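
[Editor's note: the systemd banner above encodes compile-time features as +NAME/-NAME tokens plus key=value settings. A small sketch that splits the quoted banner into enabled features, disabled features, and settings:]

```go
// Sketch: parsing the systemd 255 feature banner quoted in the log
// above into compiled-in (+) and compiled-out (-) features.
package main

import (
	"fmt"
	"strings"
)

func main() {
	banner := "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP " +
		"+GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 " +
		"+IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 " +
		"-PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB " +
		"+ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT " +
		"default-hierarchy=unified"

	var on, off, settings []string
	for _, tok := range strings.Fields(banner) {
		switch {
		case strings.HasPrefix(tok, "+"):
			on = append(on, tok[1:])
		case strings.HasPrefix(tok, "-"):
			off = append(off, tok[1:])
		default:
			settings = append(settings, tok)
		}
	}
	fmt.Println("enabled: ", on)
	fmt.Println("disabled:", off)
	fmt.Println("settings:", settings)
}
```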
Aug 13 00:21:58.544918 systemd[1]: Detected first boot. Aug 13 00:21:58.544929 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:21:58.544940 zram_generator::config[1044]: No configuration found. Aug 13 00:21:58.544953 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:21:58.544964 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:21:58.544977 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:21:58.544988 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:21:58.544999 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:21:58.545011 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:21:58.545025 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:21:58.545036 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:21:58.545047 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:21:58.545060 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:21:58.545070 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:21:58.545111 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:21:58.545124 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:21:58.545135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:21:58.545146 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:21:58.545157 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:21:58.545168 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:21:58.545181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:21:58.545193 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 13 00:21:58.545203 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:21:58.545214 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:21:58.545224 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:21:58.545235 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:21:58.545270 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:21:58.545288 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:21:58.545300 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:21:58.545313 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:21:58.545324 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:21:58.545338 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:21:58.545349 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:21:58.545359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:21:58.545370 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Aug 13 00:21:58.545380 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:21:58.545407 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:21:58.545418 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:21:58.545430 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:21:58.545441 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:21:58.545460 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:21:58.545470 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:21:58.545481 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:21:58.545492 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:21:58.545503 systemd[1]: Reached target machines.target - Containers. Aug 13 00:21:58.545513 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:21:58.545525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:21:58.545537 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:21:58.545547 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:21:58.545558 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:21:58.545568 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:21:58.545579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:21:58.545589 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:21:58.545600 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:21:58.545619 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:21:58.545630 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:21:58.545641 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:21:58.545651 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:21:58.545662 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:21:58.545673 kernel: ACPI: bus type drm_connector registered Aug 13 00:21:58.545683 kernel: fuse: init (API version 7.39) Aug 13 00:21:58.545693 kernel: loop: module loaded Aug 13 00:21:58.545703 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:21:58.545716 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:21:58.545727 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:21:58.545738 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:21:58.545748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:21:58.545759 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:21:58.545771 systemd[1]: Stopped verity-setup.service. Aug 13 00:21:58.545781 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
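
[Editor's note: the modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services above are instances of systemd's modprobe@.service template, which loads the kernel module named by the instance suffix; "fuse: init" and "loop: module loaded" are the kernel confirming two of those loads. A simplified sketch of the load-if-not-loaded behaviour follows; it is not systemd's implementation.]

```go
// Sketch: load a kernel module by name, treating "already loaded" as
// success, roughly what each modprobe@<module>.service instance does.
package main

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// loaded reports whether the module appears in /proc/modules.
func loaded(name string) bool {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) > 0 && fields[0] == name {
			return true
		}
	}
	return false
}

func main() {
	for _, mod := range []string{"dm_mod", "fuse", "loop"} {
		if loaded(mod) {
			fmt.Println(mod, "already loaded")
			continue
		}
		// -q keeps modprobe quiet if the module does not exist.
		if err := exec.Command("modprobe", "-q", mod).Run(); err != nil {
			fmt.Println(mod, "failed:", err)
			continue
		}
		fmt.Println(mod, "loaded")
	}
}
```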
Aug 13 00:21:58.545792 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:21:58.545802 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:21:58.545814 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:21:58.545825 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:21:58.545853 systemd-journald[1119]: Collecting audit messages is disabled. Aug 13 00:21:58.545877 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:21:58.545891 systemd-journald[1119]: Journal started Aug 13 00:21:58.545913 systemd-journald[1119]: Runtime Journal (/run/log/journal/560da505ede04caa8195fe28453aa077) is 5.9M, max 47.3M, 41.4M free. Aug 13 00:21:58.337498 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:21:58.546291 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:21:58.352585 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 00:21:58.352970 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:21:58.548642 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:21:58.550017 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:21:58.551413 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:21:58.551574 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:21:58.552925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:21:58.553103 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:21:58.554381 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:21:58.554539 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:21:58.555787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:21:58.555939 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:21:58.558322 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:21:58.558464 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:21:58.559582 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:21:58.559735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:21:58.560975 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:21:58.562449 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:21:58.563756 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:21:58.577517 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:21:58.592252 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:21:58.594416 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:21:58.595384 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:21:58.595443 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:21:58.597825 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 00:21:58.600194 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Aug 13 00:21:58.602427 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:21:58.603428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:21:58.604946 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:21:58.607647 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:21:58.608887 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:21:58.610242 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:21:58.614284 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:21:58.615393 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:21:58.616230 systemd-journald[1119]: Time spent on flushing to /var/log/journal/560da505ede04caa8195fe28453aa077 is 42.563ms for 854 entries. Aug 13 00:21:58.616230 systemd-journald[1119]: System Journal (/var/log/journal/560da505ede04caa8195fe28453aa077) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:21:58.665098 systemd-journald[1119]: Received client request to flush runtime journal. Aug 13 00:21:58.665147 kernel: loop0: detected capacity change from 0 to 207008 Aug 13 00:21:58.665161 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:21:58.618781 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:21:58.624292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:21:58.630144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:21:58.631809 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:21:58.633448 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:21:58.635844 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:21:58.639423 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:21:58.645622 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:21:58.657959 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 00:21:58.667304 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:21:58.668948 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:21:58.672991 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:21:58.679283 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Aug 13 00:21:58.679302 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Aug 13 00:21:58.680277 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:21:58.685168 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:21:58.688670 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:21:58.689294 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
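
[Editor's note: the journald flush statistics above (42.563 ms spent flushing 854 entries to /var/log/journal) work out to roughly 50 µs per entry; a one-glance check:]

```go
// Sketch: per-entry cost implied by the journald flush message above.
package main

import (
	"fmt"
	"time"
)

func main() {
	total := 42563 * time.Microsecond // 42.563ms from the log
	entries := 854
	fmt.Println("per entry:", total/time.Duration(entries)) // ≈ 49.8µs
}
```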
Aug 13 00:21:58.703320 kernel: loop1: detected capacity change from 0 to 114432 Aug 13 00:21:58.703314 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:21:58.731838 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:21:58.742251 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:21:58.747097 kernel: loop2: detected capacity change from 0 to 114328 Aug 13 00:21:58.757493 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Aug 13 00:21:58.757868 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Aug 13 00:21:58.762559 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:21:58.776121 kernel: loop3: detected capacity change from 0 to 207008 Aug 13 00:21:58.785104 kernel: loop4: detected capacity change from 0 to 114432 Aug 13 00:21:58.790159 kernel: loop5: detected capacity change from 0 to 114328 Aug 13 00:21:58.794008 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 13 00:21:58.794443 (sd-merge)[1185]: Merged extensions into '/usr'. Aug 13 00:21:58.799698 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:21:58.799716 systemd[1]: Reloading... Aug 13 00:21:58.846115 zram_generator::config[1211]: No configuration found. Aug 13 00:21:58.935679 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:21:58.948543 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:21:58.984275 systemd[1]: Reloading finished in 184 ms. Aug 13 00:21:59.017842 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:21:59.019349 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:21:59.033277 systemd[1]: Starting ensure-sysext.service... Aug 13 00:21:59.034987 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:21:59.043676 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:21:59.043691 systemd[1]: Reloading... Aug 13 00:21:59.052911 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:21:59.053211 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:21:59.053841 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:21:59.054053 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Aug 13 00:21:59.054121 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Aug 13 00:21:59.056653 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:21:59.056665 systemd-tmpfiles[1246]: Skipping /boot Aug 13 00:21:59.063824 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:21:59.063842 systemd-tmpfiles[1246]: Skipping /boot Aug 13 00:21:59.093186 zram_generator::config[1276]: No configuration found. 
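
[Editor's note: the loop0 through loop5 "capacity change" messages above are the sysext images being attached, after which sd-merge reports merging 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' into /usr. The /etc/extensions/kubernetes.raw symlink written by Ignition earlier is in one of the locations systemd-sysext scans. Below is a sketch that lists discoverable images, using directories named in the systemd-sysext documentation; the exact search set can vary by version.]

```go
// Sketch: enumerate sysext images the way systemd-sysext discovers
// them, resolving symlinks such as /etc/extensions/kubernetes.raw.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	dirs := []string{"/etc/extensions", "/run/extensions", "/var/lib/extensions"}
	for _, d := range dirs {
		matches, err := filepath.Glob(filepath.Join(d, "*.raw"))
		if err != nil {
			continue
		}
		for _, m := range matches {
			target, err := filepath.EvalSymlinks(m)
			if err != nil {
				target = "(broken link)"
			}
			fmt.Printf("%s -> %s\n", m, target)
		}
	}
}
```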
Aug 13 00:21:59.176215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:21:59.212685 systemd[1]: Reloading finished in 168 ms. Aug 13 00:21:59.230140 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:21:59.239537 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:21:59.248845 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:21:59.251599 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:21:59.254035 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:21:59.258401 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:21:59.270692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:21:59.275381 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:21:59.281212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:21:59.282432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:21:59.287407 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:21:59.296900 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:21:59.298052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:21:59.301000 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:21:59.302720 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:21:59.302919 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:21:59.304854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:21:59.306148 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:21:59.308230 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:21:59.310029 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:21:59.310277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:21:59.322398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:21:59.326493 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Aug 13 00:21:59.334096 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:21:59.337507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:21:59.342379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:21:59.343269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:21:59.352545 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:21:59.356291 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:21:59.360139 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Aug 13 00:21:59.362190 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:21:59.362358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:21:59.364583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:21:59.364734 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:21:59.366466 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:21:59.368896 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:21:59.369073 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:21:59.372802 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:21:59.378401 augenrules[1342]: No rules Aug 13 00:21:59.386263 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:21:59.389904 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:21:59.405384 systemd[1]: Finished ensure-sysext.service. Aug 13 00:21:59.407311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:21:59.414735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:21:59.418780 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:21:59.420845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:21:59.427110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:21:59.428064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:21:59.429949 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:21:59.432891 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:21:59.435261 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:21:59.435841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:21:59.435998 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:21:59.437401 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:21:59.437556 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:21:59.444368 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Aug 13 00:21:59.447226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:21:59.447409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:21:59.450428 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:21:59.453807 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:21:59.453964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:21:59.456727 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:21:59.461381 systemd-resolved[1313]: Positive Trust Anchors: Aug 13 00:21:59.465507 systemd-resolved[1313]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:21:59.465569 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:21:59.468105 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1360) Aug 13 00:21:59.480459 systemd-resolved[1313]: Defaulting to hostname 'linux'. Aug 13 00:21:59.491290 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:21:59.492303 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:21:59.535576 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:21:59.537018 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:21:59.545467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 00:21:59.554385 systemd-networkd[1383]: lo: Link UP Aug 13 00:21:59.554393 systemd-networkd[1383]: lo: Gained carrier Aug 13 00:21:59.560418 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:21:59.560434 systemd-networkd[1383]: Enumeration completed Aug 13 00:21:59.561494 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:21:59.566407 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:21:59.566415 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:21:59.566744 systemd[1]: Reached target network.target - Network. Aug 13 00:21:59.567634 systemd-networkd[1383]: eth0: Link UP Aug 13 00:21:59.567641 systemd-networkd[1383]: eth0: Gained carrier Aug 13 00:21:59.567657 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:21:59.569592 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:21:59.574152 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:21:59.582760 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:21:59.587148 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:21:59.588147 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:21:59.589028 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Aug 13 00:21:59.589674 systemd-timesyncd[1384]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 00:21:59.589726 systemd-timesyncd[1384]: Initial clock synchronization to Wed 2025-08-13 00:21:59.387476 UTC. Aug 13 00:21:59.589736 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
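
[Editor's note: the DHCPv4 lease above (10.0.0.132/16, gateway 10.0.0.1, with the same host also serving NTP on port 123 for timesyncd) can be sanity-checked with the standard library's netip package:]

```go
// Sketch: basic subnet math for the 10.0.0.132/16 lease logged above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("10.0.0.132/16")
	fmt.Println("address:", p.Addr())   // 10.0.0.132
	fmt.Println("network:", p.Masked()) // 10.0.0.0/16

	gw := netip.MustParseAddr("10.0.0.1")
	fmt.Println("gateway in subnet:", p.Masked().Contains(gw)) // true
}
```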
Aug 13 00:21:59.621986 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:21:59.624219 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:21:59.655822 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:21:59.657167 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:21:59.658009 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:21:59.660302 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:21:59.661217 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:21:59.662284 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:21:59.663305 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:21:59.664382 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:21:59.665416 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:21:59.665466 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:21:59.666218 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:21:59.668134 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:21:59.670546 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:21:59.679274 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:21:59.681578 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:21:59.683097 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:21:59.684003 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:21:59.684838 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:21:59.685566 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:21:59.685599 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:21:59.686559 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:21:59.688305 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:21:59.691235 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:21:59.692248 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:21:59.694295 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:21:59.695228 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:21:59.698283 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:21:59.702530 jq[1417]: false Aug 13 00:21:59.703472 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:21:59.707528 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:21:59.711040 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:21:59.717309 systemd[1]: Starting systemd-logind.service - User Login Management... 
Aug 13 00:21:59.725220 extend-filesystems[1418]: Found loop3 Aug 13 00:21:59.725220 extend-filesystems[1418]: Found loop4 Aug 13 00:21:59.725220 extend-filesystems[1418]: Found loop5 Aug 13 00:21:59.725220 extend-filesystems[1418]: Found vda Aug 13 00:21:59.725220 extend-filesystems[1418]: Found vda1 Aug 13 00:21:59.725220 extend-filesystems[1418]: Found vda2 Aug 13 00:21:59.725220 extend-filesystems[1418]: Found vda3 Aug 13 00:21:59.725220 extend-filesystems[1418]: Found usr Aug 13 00:21:59.725220 extend-filesystems[1418]: Found vda4 Aug 13 00:21:59.725220 extend-filesystems[1418]: Found vda6 Aug 13 00:21:59.725220 extend-filesystems[1418]: Found vda7 Aug 13 00:21:59.725220 extend-filesystems[1418]: Found vda9 Aug 13 00:21:59.725220 extend-filesystems[1418]: Checking size of /dev/vda9 Aug 13 00:21:59.727293 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:21:59.727811 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:21:59.730300 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:21:59.737229 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:21:59.743161 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:21:59.747906 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:21:59.748444 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:21:59.749490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:21:59.750517 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:21:59.753531 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:21:59.753724 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:21:59.753885 dbus-daemon[1416]: [system] SELinux support is enabled Aug 13 00:21:59.755399 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:21:59.768170 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1382) Aug 13 00:21:59.777048 jq[1435]: true Aug 13 00:21:59.795541 extend-filesystems[1418]: Resized partition /dev/vda9 Aug 13 00:21:59.805253 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:21:59.807404 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:21:59.811145 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 00:21:59.811598 jq[1451]: true Aug 13 00:21:59.811714 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:21:59.811747 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:21:59.812978 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:21:59.812995 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
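
[Editor's note: the kernel line above grows the root filesystem from 553472 to 1864699 blocks; with the 4 KiB block size the resize2fs output just below confirms, that is roughly 2.1 GiB to 7.1 GiB — the on-line expansion performed by extend-filesystems.service. A one-glance conversion:]

```go
// Sketch: converting the resize block counts above into bytes,
// assuming the 4 KiB ("(4k)") block size reported by resize2fs.
package main

import "fmt"

func main() {
	const blockSize = 4096
	const gib = 1 << 30
	before, after := int64(553472), int64(1864699)
	fmt.Printf("before: %.2f GiB\n", float64(before*blockSize)/gib) // ≈ 2.11
	fmt.Printf("after:  %.2f GiB\n", float64(after*blockSize)/gib)  // ≈ 7.11
}
```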
Aug 13 00:21:59.813708 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button) Aug 13 00:21:59.819431 systemd-logind[1425]: New seat seat0. Aug 13 00:21:59.822859 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:21:59.833663 update_engine[1428]: I20250813 00:21:59.833302 1428 main.cc:92] Flatcar Update Engine starting Aug 13 00:21:59.837652 tar[1438]: linux-arm64/LICENSE Aug 13 00:21:59.839094 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 00:21:59.842821 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:21:59.844847 update_engine[1428]: I20250813 00:21:59.844784 1428 update_check_scheduler.cc:74] Next update check in 5m29s Aug 13 00:21:59.852284 tar[1438]: linux-arm64/helm Aug 13 00:21:59.852538 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:21:59.853449 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:21:59.853449 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:21:59.853449 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 00:21:59.861158 extend-filesystems[1418]: Resized filesystem in /dev/vda9 Aug 13 00:21:59.855658 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:21:59.855863 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:21:59.906796 bash[1472]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:21:59.910402 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:21:59.912813 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 00:21:59.958281 locksmithd[1460]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:22:00.106742 containerd[1443]: time="2025-08-13T00:22:00.106558097Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 00:22:00.132200 containerd[1443]: time="2025-08-13T00:22:00.131797547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:22:00.134672 containerd[1443]: time="2025-08-13T00:22:00.134500808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:22:00.134672 containerd[1443]: time="2025-08-13T00:22:00.134548844Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:22:00.134672 containerd[1443]: time="2025-08-13T00:22:00.134571107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:22:00.134750 containerd[1443]: time="2025-08-13T00:22:00.134730613Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 00:22:00.134770 containerd[1443]: time="2025-08-13T00:22:00.134747222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:22:00.134808 containerd[1443]: time="2025-08-13T00:22:00.134798455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:22:00.134828 containerd[1443]: time="2025-08-13T00:22:00.134810307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:22:00.134992 containerd[1443]: time="2025-08-13T00:22:00.134968683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:22:00.135016 containerd[1443]: time="2025-08-13T00:22:00.134993246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:22:00.135016 containerd[1443]: time="2025-08-13T00:22:00.135009895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:22:00.135054 containerd[1443]: time="2025-08-13T00:22:00.135020422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:22:00.135148 containerd[1443]: time="2025-08-13T00:22:00.135132127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:22:00.135381 containerd[1443]: time="2025-08-13T00:22:00.135350507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:22:00.135481 containerd[1443]: time="2025-08-13T00:22:00.135464318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:22:00.135502 containerd[1443]: time="2025-08-13T00:22:00.135484398Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:22:00.135566 containerd[1443]: time="2025-08-13T00:22:00.135553058Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:22:00.135608 containerd[1443]: time="2025-08-13T00:22:00.135596181Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:22:00.142420 containerd[1443]: time="2025-08-13T00:22:00.142114023Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:22:00.142420 containerd[1443]: time="2025-08-13T00:22:00.142253489Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:22:00.142420 containerd[1443]: time="2025-08-13T00:22:00.142283277Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:22:00.142420 containerd[1443]: time="2025-08-13T00:22:00.142394943Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:22:00.142420 containerd[1443]: time="2025-08-13T00:22:00.142416075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:22:00.142583 containerd[1443]: time="2025-08-13T00:22:00.142560882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Aug 13 00:22:00.143099 containerd[1443]: time="2025-08-13T00:22:00.143061469Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:22:00.143225 containerd[1443]: time="2025-08-13T00:22:00.143200272Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:22:00.143248 containerd[1443]: time="2025-08-13T00:22:00.143226707Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:22:00.143248 containerd[1443]: time="2025-08-13T00:22:00.143242068Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:22:00.143282 containerd[1443]: time="2025-08-13T00:22:00.143256027Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:22:00.143282 containerd[1443]: time="2025-08-13T00:22:00.143269166Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:22:00.143315 containerd[1443]: time="2025-08-13T00:22:00.143300241Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:22:00.143341 containerd[1443]: time="2025-08-13T00:22:00.143315408Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:22:00.143341 containerd[1443]: time="2025-08-13T00:22:00.143330146Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:22:00.143375 containerd[1443]: time="2025-08-13T00:22:00.143342622Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:22:00.143375 containerd[1443]: time="2025-08-13T00:22:00.143355645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:22:00.143375 containerd[1443]: time="2025-08-13T00:22:00.143367966Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:22:00.143422 containerd[1443]: time="2025-08-13T00:22:00.143389566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143422 containerd[1443]: time="2025-08-13T00:22:00.143403524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143422 containerd[1443]: time="2025-08-13T00:22:00.143415533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143478 containerd[1443]: time="2025-08-13T00:22:00.143427308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143478 containerd[1443]: time="2025-08-13T00:22:00.143439667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143478 containerd[1443]: time="2025-08-13T00:22:00.143452768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143478 containerd[1443]: time="2025-08-13T00:22:00.143464387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Aug 13 00:22:00.143478 containerd[1443]: time="2025-08-13T00:22:00.143476863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143561 containerd[1443]: time="2025-08-13T00:22:00.143489964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143561 containerd[1443]: time="2025-08-13T00:22:00.143503688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143561 containerd[1443]: time="2025-08-13T00:22:00.143514878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143561 containerd[1443]: time="2025-08-13T00:22:00.143528135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143561 containerd[1443]: time="2025-08-13T00:22:00.143540611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143561 containerd[1443]: time="2025-08-13T00:22:00.143555934Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:22:00.143666 containerd[1443]: time="2025-08-13T00:22:00.143576755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143666 containerd[1443]: time="2025-08-13T00:22:00.143589270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.143666 containerd[1443]: time="2025-08-13T00:22:00.143600031Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:22:00.143973 containerd[1443]: time="2025-08-13T00:22:00.143944075Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:22:00.144370 containerd[1443]: time="2025-08-13T00:22:00.144338766Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:22:00.144395 containerd[1443]: time="2025-08-13T00:22:00.144367385Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:22:00.144426 containerd[1443]: time="2025-08-13T00:22:00.144411365Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:22:00.144494 containerd[1443]: time="2025-08-13T00:22:00.144468211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:22:00.144514 containerd[1443]: time="2025-08-13T00:22:00.144507006Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:22:00.144533 containerd[1443]: time="2025-08-13T00:22:00.144518820Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:22:00.144558 containerd[1443]: time="2025-08-13T00:22:00.144538471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:22:00.145294 containerd[1443]: time="2025-08-13T00:22:00.145217785Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:22:00.145395 containerd[1443]: time="2025-08-13T00:22:00.145305122Z" level=info msg="Connect containerd service" Aug 13 00:22:00.145395 containerd[1443]: time="2025-08-13T00:22:00.145335846Z" level=info msg="using legacy CRI server" Aug 13 00:22:00.145395 containerd[1443]: time="2025-08-13T00:22:00.145343371Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:22:00.145456 containerd[1443]: time="2025-08-13T00:22:00.145426496Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:22:00.146492 containerd[1443]: time="2025-08-13T00:22:00.146458744Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:22:00.146865 
containerd[1443]: time="2025-08-13T00:22:00.146826611Z" level=info msg="Start subscribing containerd event" Aug 13 00:22:00.146901 containerd[1443]: time="2025-08-13T00:22:00.146889501Z" level=info msg="Start recovering state" Aug 13 00:22:00.147384 containerd[1443]: time="2025-08-13T00:22:00.147355465Z" level=info msg="Start event monitor" Aug 13 00:22:00.147419 containerd[1443]: time="2025-08-13T00:22:00.147389542Z" level=info msg="Start snapshots syncer" Aug 13 00:22:00.147419 containerd[1443]: time="2025-08-13T00:22:00.147401044Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:22:00.147419 containerd[1443]: time="2025-08-13T00:22:00.147409309Z" level=info msg="Start streaming server" Aug 13 00:22:00.148471 containerd[1443]: time="2025-08-13T00:22:00.148448732Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:22:00.148515 containerd[1443]: time="2025-08-13T00:22:00.148503746Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:22:00.148568 containerd[1443]: time="2025-08-13T00:22:00.148555407Z" level=info msg="containerd successfully booted in 0.042816s" Aug 13 00:22:00.148651 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:22:00.237274 tar[1438]: linux-arm64/README.md Aug 13 00:22:00.250601 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:22:00.258591 sshd_keygen[1436]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:22:00.277998 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:22:00.293647 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:22:00.298835 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:22:00.299090 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:22:00.302150 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:22:00.314288 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:22:00.317212 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:22:00.319516 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 13 00:22:00.321009 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:22:00.699247 systemd-networkd[1383]: eth0: Gained IPv6LL Aug 13 00:22:00.701698 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:22:00.703305 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:22:00.711391 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 00:22:00.713830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:00.715783 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:22:00.737412 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:22:00.738950 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 00:22:00.739151 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 00:22:00.741196 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:22:01.279491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:01.280864 systemd[1]: Reached target multi-user.target - Multi-User System. 
Aug 13 00:22:01.282553 systemd[1]: Startup finished in 585ms (kernel) + 5.235s (initrd) + 3.369s (userspace) = 9.190s. Aug 13 00:22:01.283756 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:22:01.732393 kubelet[1529]: E0813 00:22:01.732277 1529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:22:01.734717 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:22:01.734875 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:22:05.561105 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:22:05.573328 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:52178.service - OpenSSH per-connection server daemon (10.0.0.1:52178). Aug 13 00:22:05.654741 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 52178 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:22:05.656915 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:05.668978 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:22:05.679373 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:22:05.683885 systemd-logind[1425]: New session 1 of user core. Aug 13 00:22:05.696824 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:22:05.705498 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:22:05.708451 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:22:05.799511 systemd[1547]: Queued start job for default target default.target. Aug 13 00:22:05.808182 systemd[1547]: Created slice app.slice - User Application Slice. Aug 13 00:22:05.808214 systemd[1547]: Reached target paths.target - Paths. Aug 13 00:22:05.808227 systemd[1547]: Reached target timers.target - Timers. Aug 13 00:22:05.809701 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:22:05.821599 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:22:05.821723 systemd[1547]: Reached target sockets.target - Sockets. Aug 13 00:22:05.821736 systemd[1547]: Reached target basic.target - Basic System. Aug 13 00:22:05.821794 systemd[1547]: Reached target default.target - Main User Target. Aug 13 00:22:05.822068 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:22:05.823103 systemd[1547]: Startup finished in 105ms. Aug 13 00:22:05.823617 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:22:05.891576 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:52182.service - OpenSSH per-connection server daemon (10.0.0.1:52182). Aug 13 00:22:05.937193 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 52182 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:22:05.938566 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:05.943212 systemd-logind[1425]: New session 2 of user core. Aug 13 00:22:05.957267 systemd[1]: Started session-2.scope - Session 2 of User core. 
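Note: the kubelet exit directly above is the usual pre-bootstrap failure mode. /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so until that runs the unit exits with status 1 and systemd keeps rescheduling it (the restart counters appear further down). A sketch of the file it is looking for, assuming the v1beta1 KubeletConfiguration schema and an illustrative minimum of fields:

```python
# Sketch: write the KubeletConfiguration that run.go:72 above failed to open.
# Fields are an illustrative minimum; kubeadm would generate the real one.
from pathlib import Path
import textwrap

cfg = Path("/var/lib/kubelet/config.yaml")
cfg.parent.mkdir(parents=True, exist_ok=True)
cfg.write_text(textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # matches SystemdCgroup:true in the runc options above
"""))
```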
Aug 13 00:22:06.010372 sshd[1558]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:06.025733 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:52182.service: Deactivated successfully. Aug 13 00:22:06.028680 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:22:06.030043 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:22:06.031389 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:52198.service - OpenSSH per-connection server daemon (10.0.0.1:52198). Aug 13 00:22:06.032671 systemd-logind[1425]: Removed session 2. Aug 13 00:22:06.068933 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 52198 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:22:06.070362 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:06.074405 systemd-logind[1425]: New session 3 of user core. Aug 13 00:22:06.085283 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:22:06.134356 sshd[1565]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:06.154026 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:52198.service: Deactivated successfully. Aug 13 00:22:06.156260 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:22:06.157969 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:22:06.160231 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:52204.service - OpenSSH per-connection server daemon (10.0.0.1:52204). Aug 13 00:22:06.161293 systemd-logind[1425]: Removed session 3. Aug 13 00:22:06.200396 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 52204 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:22:06.201972 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:06.206168 systemd-logind[1425]: New session 4 of user core. Aug 13 00:22:06.219293 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:22:06.272382 sshd[1572]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:06.287238 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:52204.service: Deactivated successfully. Aug 13 00:22:06.289060 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:22:06.290451 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:22:06.291813 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:52212.service - OpenSSH per-connection server daemon (10.0.0.1:52212). Aug 13 00:22:06.292875 systemd-logind[1425]: Removed session 4. Aug 13 00:22:06.329158 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 52212 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:22:06.330730 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:06.335515 systemd-logind[1425]: New session 5 of user core. Aug 13 00:22:06.346284 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:22:06.417165 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:22:06.417461 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:22:06.430186 sudo[1582]: pam_unix(sudo:session): session closed for user root Aug 13 00:22:06.432100 sshd[1579]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:06.446952 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:52212.service: Deactivated successfully. 
Aug 13 00:22:06.450840 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:22:06.452294 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:22:06.453741 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:52224.service - OpenSSH per-connection server daemon (10.0.0.1:52224). Aug 13 00:22:06.454634 systemd-logind[1425]: Removed session 5. Aug 13 00:22:06.492326 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 52224 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:22:06.493789 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:06.497414 systemd-logind[1425]: New session 6 of user core. Aug 13 00:22:06.509363 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:22:06.561945 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:22:06.562275 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:22:06.565810 sudo[1591]: pam_unix(sudo:session): session closed for user root Aug 13 00:22:06.572414 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:22:06.572702 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:22:06.596422 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:22:06.597881 auditctl[1594]: No rules Aug 13 00:22:06.599047 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:22:06.599306 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:22:06.601926 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:22:06.632948 augenrules[1612]: No rules Aug 13 00:22:06.635204 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:22:06.637493 sudo[1590]: pam_unix(sudo:session): session closed for user root Aug 13 00:22:06.642033 sshd[1587]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:06.653938 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:52224.service: Deactivated successfully. Aug 13 00:22:06.656510 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:22:06.660155 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:22:06.679515 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:52238.service - OpenSSH per-connection server daemon (10.0.0.1:52238). Aug 13 00:22:06.680281 systemd-logind[1425]: Removed session 6. Aug 13 00:22:06.716408 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 52238 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:22:06.717822 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:06.721942 systemd-logind[1425]: New session 7 of user core. Aug 13 00:22:06.732285 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:22:06.783540 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:22:06.783833 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:22:07.122406 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Aug 13 00:22:07.122518 (dockerd)[1641]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:22:07.392511 dockerd[1641]: time="2025-08-13T00:22:07.392372324Z" level=info msg="Starting up" Aug 13 00:22:07.565607 dockerd[1641]: time="2025-08-13T00:22:07.565538885Z" level=info msg="Loading containers: start." Aug 13 00:22:07.684189 kernel: Initializing XFRM netlink socket Aug 13 00:22:07.757206 systemd-networkd[1383]: docker0: Link UP Aug 13 00:22:07.778815 dockerd[1641]: time="2025-08-13T00:22:07.778705586Z" level=info msg="Loading containers: done." Aug 13 00:22:07.793042 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck60561533-merged.mount: Deactivated successfully. Aug 13 00:22:07.794656 dockerd[1641]: time="2025-08-13T00:22:07.794598428Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:22:07.794739 dockerd[1641]: time="2025-08-13T00:22:07.794711614Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:22:07.794850 dockerd[1641]: time="2025-08-13T00:22:07.794823968Z" level=info msg="Daemon has completed initialization" Aug 13 00:22:07.851787 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:22:07.852331 dockerd[1641]: time="2025-08-13T00:22:07.851681565Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:22:08.451187 containerd[1443]: time="2025-08-13T00:22:08.451147253Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 00:22:09.067552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922374336.mount: Deactivated successfully. 
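Note: once dockerd logs "API listen on /run/docker.sock" it answers its HTTP API over that unix socket. A stdlib-only probe of the /_ping endpoint, using a hypothetical helper class that is not part of this host's tooling:

```python
# Sketch: ping the Docker daemon over the unix socket it just advertised.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client wired to a unix socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")   # host only feeds the Host: header
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/_ping")
print(conn.getresponse().read())   # b'OK' while the daemon is up
```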
Aug 13 00:22:09.898163 containerd[1443]: time="2025-08-13T00:22:09.897642639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:09.898163 containerd[1443]: time="2025-08-13T00:22:09.898105497Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327783" Aug 13 00:22:09.899106 containerd[1443]: time="2025-08-13T00:22:09.899059238Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:09.902147 containerd[1443]: time="2025-08-13T00:22:09.902094607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:09.904341 containerd[1443]: time="2025-08-13T00:22:09.904061872Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 1.452870116s" Aug 13 00:22:09.904341 containerd[1443]: time="2025-08-13T00:22:09.904126021Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\"" Aug 13 00:22:09.905514 containerd[1443]: time="2025-08-13T00:22:09.905480256Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 00:22:10.873900 containerd[1443]: time="2025-08-13T00:22:10.873836114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:10.875090 containerd[1443]: time="2025-08-13T00:22:10.874879057Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529698" Aug 13 00:22:10.876163 containerd[1443]: time="2025-08-13T00:22:10.876129610Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:10.879448 containerd[1443]: time="2025-08-13T00:22:10.879406011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:10.880473 containerd[1443]: time="2025-08-13T00:22:10.880425749Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 974.900147ms" Aug 13 00:22:10.880525 containerd[1443]: time="2025-08-13T00:22:10.880473072Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\"" Aug 13 00:22:10.880935 
containerd[1443]: time="2025-08-13T00:22:10.880891791Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 00:22:11.895146 containerd[1443]: time="2025-08-13T00:22:11.895088060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:11.896437 containerd[1443]: time="2025-08-13T00:22:11.896396845Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484140" Aug 13 00:22:11.897129 containerd[1443]: time="2025-08-13T00:22:11.897101722Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:11.900688 containerd[1443]: time="2025-08-13T00:22:11.900639470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:11.901955 containerd[1443]: time="2025-08-13T00:22:11.901893296Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.020962675s" Aug 13 00:22:11.901955 containerd[1443]: time="2025-08-13T00:22:11.901934097Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\"" Aug 13 00:22:11.902665 containerd[1443]: time="2025-08-13T00:22:11.902423753Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 00:22:11.953445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:22:11.971303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:12.075820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:12.080287 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:22:12.124057 kubelet[1858]: E0813 00:22:12.123996 1858 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:22:12.127063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:22:12.127221 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:22:12.914757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540405731.mount: Deactivated successfully. 
Aug 13 00:22:13.285531 containerd[1443]: time="2025-08-13T00:22:13.285477532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:13.286152 containerd[1443]: time="2025-08-13T00:22:13.286123364Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378407" Aug 13 00:22:13.286722 containerd[1443]: time="2025-08-13T00:22:13.286690946Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:13.288524 containerd[1443]: time="2025-08-13T00:22:13.288474171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:13.289445 containerd[1443]: time="2025-08-13T00:22:13.289405765Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.386953007s" Aug 13 00:22:13.289488 containerd[1443]: time="2025-08-13T00:22:13.289444512Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\"" Aug 13 00:22:13.290007 containerd[1443]: time="2025-08-13T00:22:13.289985373Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:22:13.754034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4017025511.mount: Deactivated successfully. 
Aug 13 00:22:14.452017 containerd[1443]: time="2025-08-13T00:22:14.451954915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:14.452689 containerd[1443]: time="2025-08-13T00:22:14.452655814Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Aug 13 00:22:14.454165 containerd[1443]: time="2025-08-13T00:22:14.454126383Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:14.457178 containerd[1443]: time="2025-08-13T00:22:14.457144420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:14.458530 containerd[1443]: time="2025-08-13T00:22:14.458380784Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.168362592s" Aug 13 00:22:14.458530 containerd[1443]: time="2025-08-13T00:22:14.458417999Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:22:14.459174 containerd[1443]: time="2025-08-13T00:22:14.459112842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:22:14.901918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3476147925.mount: Deactivated successfully. 
Aug 13 00:22:14.906543 containerd[1443]: time="2025-08-13T00:22:14.906509597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:14.907689 containerd[1443]: time="2025-08-13T00:22:14.907464622Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Aug 13 00:22:14.908279 containerd[1443]: time="2025-08-13T00:22:14.908248835Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:14.911122 containerd[1443]: time="2025-08-13T00:22:14.911087215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:14.911968 containerd[1443]: time="2025-08-13T00:22:14.911885812Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 452.743565ms" Aug 13 00:22:14.911968 containerd[1443]: time="2025-08-13T00:22:14.911919958Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:22:14.912425 containerd[1443]: time="2025-08-13T00:22:14.912402432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:22:15.425387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076310112.mount: Deactivated successfully. Aug 13 00:22:17.145941 containerd[1443]: time="2025-08-13T00:22:17.145887621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:17.147761 containerd[1443]: time="2025-08-13T00:22:17.147728683Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Aug 13 00:22:17.151213 containerd[1443]: time="2025-08-13T00:22:17.151134053Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:17.308157 containerd[1443]: time="2025-08-13T00:22:17.308055299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:17.309670 containerd[1443]: time="2025-08-13T00:22:17.309460462Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.397022753s" Aug 13 00:22:17.309670 containerd[1443]: time="2025-08-13T00:22:17.309509614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Aug 13 00:22:22.203467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Aug 13 00:22:22.213298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:22.340128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:22.344071 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:22:22.377440 kubelet[2017]: E0813 00:22:22.377380 2017 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:22:22.380102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:22:22.380362 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:22:22.582503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:22.591310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:22.617738 systemd[1]: Reloading requested from client PID 2033 ('systemctl') (unit session-7.scope)... Aug 13 00:22:22.617750 systemd[1]: Reloading... Aug 13 00:22:22.682111 zram_generator::config[2072]: No configuration found. Aug 13 00:22:22.922699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:22:22.976233 systemd[1]: Reloading finished in 358 ms. Aug 13 00:22:23.024141 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:22:23.024208 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:22:23.025178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:23.028489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:23.130754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:23.134549 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:22:23.168418 kubelet[2118]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:22:23.168418 kubelet[2118]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:22:23.168418 kubelet[2118]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
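Note: the three deprecation warnings above are the kubelet steering flags into the config file. Two have direct KubeletConfiguration equivalents; --pod-infra-container-image has none, since (per the warning) the image garbage collector will take the sandbox image from CRI. A sketch of the equivalent fields, assuming the v1beta1 schema, with the socket and plugin paths taken from elsewhere in this log:

```python
# Sketch: config-file equivalents for the deprecated flags warned about above.
# Field names assume the kubelet.config.k8s.io/v1beta1 schema.
import textwrap

print(textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""))
```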
Aug 13 00:22:23.168778 kubelet[2118]: I0813 00:22:23.168501 2118 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:22:24.316425 kubelet[2118]: I0813 00:22:24.316374 2118 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:22:24.316425 kubelet[2118]: I0813 00:22:24.316411 2118 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:22:24.316788 kubelet[2118]: I0813 00:22:24.316690 2118 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:22:24.358125 kubelet[2118]: E0813 00:22:24.358067 2118 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:24.359840 kubelet[2118]: I0813 00:22:24.359716 2118 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:22:24.368113 kubelet[2118]: E0813 00:22:24.368071 2118 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:22:24.368113 kubelet[2118]: I0813 00:22:24.368112 2118 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:22:24.373108 kubelet[2118]: I0813 00:22:24.371830 2118 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:22:24.373108 kubelet[2118]: I0813 00:22:24.372067 2118 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:22:24.373108 kubelet[2118]: I0813 00:22:24.372114 2118 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:22:24.373108 kubelet[2118]: I0813 00:22:24.372534 2118 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:22:24.374396 kubelet[2118]: I0813 00:22:24.372544 2118 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:22:24.374396 kubelet[2118]: I0813 00:22:24.372866 2118 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:24.380524 kubelet[2118]: I0813 00:22:24.380490 2118 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:22:24.383152 kubelet[2118]: I0813 00:22:24.380537 2118 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:22:24.383152 kubelet[2118]: I0813 00:22:24.380564 2118 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:22:24.383152 kubelet[2118]: I0813 00:22:24.380575 2118 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:22:24.383705 kubelet[2118]: I0813 00:22:24.383672 2118 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:22:24.384451 kubelet[2118]: I0813 00:22:24.384427 2118 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:22:24.384623 kubelet[2118]: W0813 00:22:24.384469 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Aug 13 00:22:24.384655 kubelet[2118]: E0813 00:22:24.384630 2118 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:24.384715 kubelet[2118]: W0813 00:22:24.384673 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Aug 13 00:22:24.384749 kubelet[2118]: E0813 00:22:24.384723 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:24.384792 kubelet[2118]: W0813 00:22:24.384779 2118 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:22:24.385797 kubelet[2118]: I0813 00:22:24.385770 2118 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:22:24.385934 kubelet[2118]: I0813 00:22:24.385920 2118 server.go:1287] "Started kubelet" Aug 13 00:22:24.386218 kubelet[2118]: I0813 00:22:24.386174 2118 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:22:24.386454 kubelet[2118]: I0813 00:22:24.386402 2118 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:22:24.386808 kubelet[2118]: I0813 00:22:24.386784 2118 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:22:24.387252 kubelet[2118]: I0813 00:22:24.387201 2118 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:22:24.388598 kubelet[2118]: I0813 00:22:24.388571 2118 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:22:24.390407 kubelet[2118]: I0813 00:22:24.390373 2118 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:22:24.391947 kubelet[2118]: E0813 00:22:24.391909 2118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:22:24.391947 kubelet[2118]: I0813 00:22:24.391951 2118 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:22:24.392142 kubelet[2118]: I0813 00:22:24.392120 2118 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:22:24.392195 kubelet[2118]: I0813 00:22:24.392179 2118 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:22:24.392260 kubelet[2118]: E0813 00:22:24.392211 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Aug 13 00:22:24.392575 kubelet[2118]: W0813 00:22:24.392530 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Aug 
13 00:22:24.392675 kubelet[2118]: E0813 00:22:24.392581 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:24.395604 kubelet[2118]: E0813 00:22:24.395577 2118 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:22:24.395840 kubelet[2118]: I0813 00:22:24.395707 2118 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:22:24.395904 kubelet[2118]: E0813 00:22:24.395210 2118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2bb6e91b9042 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:22:24.38588013 +0000 UTC m=+1.248414441,LastTimestamp:2025-08-13 00:22:24.38588013 +0000 UTC m=+1.248414441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:22:24.396166 kubelet[2118]: I0813 00:22:24.396033 2118 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:22:24.397458 kubelet[2118]: I0813 00:22:24.397426 2118 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:22:24.410020 kubelet[2118]: I0813 00:22:24.409980 2118 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:22:24.411402 kubelet[2118]: I0813 00:22:24.410972 2118 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:22:24.411402 kubelet[2118]: I0813 00:22:24.410994 2118 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:22:24.411402 kubelet[2118]: I0813 00:22:24.411012 2118 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:24.414698 kubelet[2118]: I0813 00:22:24.414434 2118 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:22:24.414698 kubelet[2118]: I0813 00:22:24.414478 2118 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:22:24.414698 kubelet[2118]: I0813 00:22:24.414510 2118 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
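Note: every "connection refused" against https://10.0.0.132:6443 in this stretch is the kubelet probing an API server it has not started yet; on a kubeadm-style node the kubelet must first launch the apiserver from a static pod, so these reflector and lease errors resolve themselves once that pod is up. The same reachability check in a few lines, address taken from the log:

```python
# Sketch: the reachability test behind the "connection refused" records above.
import socket

def api_up(host="10.0.0.132", port=6443, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("apiserver reachable:", api_up())
```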
Aug 13 00:22:24.414698 kubelet[2118]: I0813 00:22:24.414517 2118 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:22:24.414698 kubelet[2118]: E0813 00:22:24.414575 2118 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:22:24.415103 kubelet[2118]: W0813 00:22:24.415026 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Aug 13 00:22:24.415103 kubelet[2118]: E0813 00:22:24.415066 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:24.492614 kubelet[2118]: E0813 00:22:24.492579 2118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:22:24.514695 kubelet[2118]: E0813 00:22:24.514660 2118 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:22:24.580708 kubelet[2118]: I0813 00:22:24.580577 2118 policy_none.go:49] "None policy: Start" Aug 13 00:22:24.580708 kubelet[2118]: I0813 00:22:24.580613 2118 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:22:24.580708 kubelet[2118]: I0813 00:22:24.580640 2118 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:22:24.586057 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:22:24.592768 kubelet[2118]: E0813 00:22:24.592727 2118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:22:24.593095 kubelet[2118]: E0813 00:22:24.593054 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" Aug 13 00:22:24.597948 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:22:24.601380 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:22:24.611890 kubelet[2118]: I0813 00:22:24.611854 2118 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:22:24.612486 kubelet[2118]: I0813 00:22:24.612182 2118 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:22:24.612486 kubelet[2118]: I0813 00:22:24.612200 2118 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:22:24.612486 kubelet[2118]: I0813 00:22:24.612426 2118 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:22:24.613375 kubelet[2118]: E0813 00:22:24.613347 2118 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:22:24.613444 kubelet[2118]: E0813 00:22:24.613392 2118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 00:22:24.713875 kubelet[2118]: I0813 00:22:24.713841 2118 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:22:24.714316 kubelet[2118]: E0813 00:22:24.714287 2118 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Aug 13 00:22:24.722510 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice - libcontainer container kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice. Aug 13 00:22:24.730065 kubelet[2118]: E0813 00:22:24.729864 2118 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:22:24.732483 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice - libcontainer container kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice. Aug 13 00:22:24.740165 kubelet[2118]: E0813 00:22:24.740131 2118 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:22:24.742478 systemd[1]: Created slice kubepods-burstable-pod3f01b8099cd47e82d47d8a9334afdd21.slice - libcontainer container kubepods-burstable-pod3f01b8099cd47e82d47d8a9334afdd21.slice. Aug 13 00:22:24.743925 kubelet[2118]: E0813 00:22:24.743882 2118 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:22:24.794156 kubelet[2118]: I0813 00:22:24.794113 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:24.794156 kubelet[2118]: I0813 00:22:24.794154 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:24.794263 kubelet[2118]: I0813 00:22:24.794175 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f01b8099cd47e82d47d8a9334afdd21-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f01b8099cd47e82d47d8a9334afdd21\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:22:24.794263 kubelet[2118]: I0813 00:22:24.794193 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:24.794263 kubelet[2118]: I0813 00:22:24.794220 2118 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:24.794333 kubelet[2118]: I0813 00:22:24.794285 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:24.794374 kubelet[2118]: I0813 00:22:24.794346 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:22:24.794411 kubelet[2118]: I0813 00:22:24.794394 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f01b8099cd47e82d47d8a9334afdd21-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f01b8099cd47e82d47d8a9334afdd21\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:22:24.794437 kubelet[2118]: I0813 00:22:24.794416 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f01b8099cd47e82d47d8a9334afdd21-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f01b8099cd47e82d47d8a9334afdd21\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:22:24.916469 kubelet[2118]: I0813 00:22:24.916365 2118 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:22:24.917204 kubelet[2118]: E0813 00:22:24.916722 2118 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Aug 13 00:22:24.994384 kubelet[2118]: E0813 00:22:24.994290 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Aug 13 00:22:25.030700 kubelet[2118]: E0813 00:22:25.030646 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:25.031352 containerd[1443]: time="2025-08-13T00:22:25.031301801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:25.040665 kubelet[2118]: E0813 00:22:25.040580 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:25.043099 containerd[1443]: time="2025-08-13T00:22:25.043053454Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:25.044379 kubelet[2118]: E0813 00:22:25.044346 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:25.045044 containerd[1443]: time="2025-08-13T00:22:25.044854237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f01b8099cd47e82d47d8a9334afdd21,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:25.318540 kubelet[2118]: I0813 00:22:25.318500 2118 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:22:25.318860 kubelet[2118]: E0813 00:22:25.318816 2118 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Aug 13 00:22:25.340447 kubelet[2118]: W0813 00:22:25.340400 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Aug 13 00:22:25.340447 kubelet[2118]: E0813 00:22:25.340441 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:25.560852 kubelet[2118]: W0813 00:22:25.560780 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Aug 13 00:22:25.560852 kubelet[2118]: E0813 00:22:25.560849 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:25.617240 kubelet[2118]: W0813 00:22:25.617095 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Aug 13 00:22:25.617240 kubelet[2118]: E0813 00:22:25.617160 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:25.619260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982311615.mount: Deactivated successfully. 
Aug 13 00:22:25.629397 containerd[1443]: time="2025-08-13T00:22:25.629324457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:25.629996 containerd[1443]: time="2025-08-13T00:22:25.629818214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 13 00:22:25.630460 containerd[1443]: time="2025-08-13T00:22:25.630417636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:25.631300 containerd[1443]: time="2025-08-13T00:22:25.631270710Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:25.631765 containerd[1443]: time="2025-08-13T00:22:25.631731256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:22:25.632668 containerd[1443]: time="2025-08-13T00:22:25.632643038Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:25.632938 containerd[1443]: time="2025-08-13T00:22:25.632911437Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:22:25.635048 containerd[1443]: time="2025-08-13T00:22:25.634996566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:25.638550 containerd[1443]: time="2025-08-13T00:22:25.638102458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 593.180362ms" Aug 13 00:22:25.639481 containerd[1443]: time="2025-08-13T00:22:25.639437220Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 596.20133ms" Aug 13 00:22:25.642083 containerd[1443]: time="2025-08-13T00:22:25.642038405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 610.657076ms" Aug 13 00:22:25.785009 kubelet[2118]: W0813 00:22:25.784939 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Aug 13 00:22:25.785009 kubelet[2118]: 
E0813 00:22:25.785012 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:25.800466 kubelet[2118]: E0813 00:22:25.795788 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s" Aug 13 00:22:25.814189 containerd[1443]: time="2025-08-13T00:22:25.814059171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:25.814189 containerd[1443]: time="2025-08-13T00:22:25.814146893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:25.814189 containerd[1443]: time="2025-08-13T00:22:25.814158962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:25.814597 containerd[1443]: time="2025-08-13T00:22:25.814506690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:25.814768 containerd[1443]: time="2025-08-13T00:22:25.814676417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:25.814768 containerd[1443]: time="2025-08-13T00:22:25.814708189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:25.815564 containerd[1443]: time="2025-08-13T00:22:25.814982143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:25.815564 containerd[1443]: time="2025-08-13T00:22:25.815005802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:25.815564 containerd[1443]: time="2025-08-13T00:22:25.815106351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:25.815835 containerd[1443]: time="2025-08-13T00:22:25.815676120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:25.815835 containerd[1443]: time="2025-08-13T00:22:25.815700938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:25.815835 containerd[1443]: time="2025-08-13T00:22:25.815779427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:25.837282 systemd[1]: Started cri-containerd-5e133e31b23d9e0b58388e19c59e90c7281dc21ec6be46e010e79c65c906d00d.scope - libcontainer container 5e133e31b23d9e0b58388e19c59e90c7281dc21ec6be46e010e79c65c906d00d. 
Aug 13 00:22:25.841091 systemd[1]: Started cri-containerd-a4835c47bc7ee0fea2a5558a76a0b3416c3be3d01628d54baba22926af577289.scope - libcontainer container a4835c47bc7ee0fea2a5558a76a0b3416c3be3d01628d54baba22926af577289. Aug 13 00:22:25.842663 systemd[1]: Started cri-containerd-e697b883effca38ca82ab5921d994e80f4bdcb98a10f3f35d2ff37aaa310eb64.scope - libcontainer container e697b883effca38ca82ab5921d994e80f4bdcb98a10f3f35d2ff37aaa310eb64. Aug 13 00:22:25.873599 containerd[1443]: time="2025-08-13T00:22:25.872607622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e133e31b23d9e0b58388e19c59e90c7281dc21ec6be46e010e79c65c906d00d\"" Aug 13 00:22:25.875478 kubelet[2118]: E0813 00:22:25.875437 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:25.877715 containerd[1443]: time="2025-08-13T00:22:25.877679790Z" level=info msg="CreateContainer within sandbox \"5e133e31b23d9e0b58388e19c59e90c7281dc21ec6be46e010e79c65c906d00d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:22:25.884255 containerd[1443]: time="2025-08-13T00:22:25.884143309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4835c47bc7ee0fea2a5558a76a0b3416c3be3d01628d54baba22926af577289\"" Aug 13 00:22:25.884255 containerd[1443]: time="2025-08-13T00:22:25.884167047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f01b8099cd47e82d47d8a9334afdd21,Namespace:kube-system,Attempt:0,} returns sandbox id \"e697b883effca38ca82ab5921d994e80f4bdcb98a10f3f35d2ff37aaa310eb64\"" Aug 13 00:22:25.884853 kubelet[2118]: E0813 00:22:25.884827 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:25.885143 kubelet[2118]: E0813 00:22:25.884908 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:25.886940 containerd[1443]: time="2025-08-13T00:22:25.886899395Z" level=info msg="CreateContainer within sandbox \"e697b883effca38ca82ab5921d994e80f4bdcb98a10f3f35d2ff37aaa310eb64\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:22:25.887200 containerd[1443]: time="2025-08-13T00:22:25.886917339Z" level=info msg="CreateContainer within sandbox \"a4835c47bc7ee0fea2a5558a76a0b3416c3be3d01628d54baba22926af577289\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:22:25.893311 containerd[1443]: time="2025-08-13T00:22:25.893264562Z" level=info msg="CreateContainer within sandbox \"5e133e31b23d9e0b58388e19c59e90c7281dc21ec6be46e010e79c65c906d00d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9bed0b7b6e4b482cba9df9fb2a4f051aef4de72cc6c9dd384dca4a9daf7932aa\"" Aug 13 00:22:25.893889 containerd[1443]: time="2025-08-13T00:22:25.893857350Z" level=info msg="StartContainer for \"9bed0b7b6e4b482cba9df9fb2a4f051aef4de72cc6c9dd384dca4a9daf7932aa\"" Aug 13 00:22:25.902184 containerd[1443]: time="2025-08-13T00:22:25.902139397Z" level=info 
msg="CreateContainer within sandbox \"a4835c47bc7ee0fea2a5558a76a0b3416c3be3d01628d54baba22926af577289\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"54a8d520aa5e8cf7f6879accc5579b197406d5f519c2b50db2d872d85902844d\"" Aug 13 00:22:25.902829 containerd[1443]: time="2025-08-13T00:22:25.902805239Z" level=info msg="StartContainer for \"54a8d520aa5e8cf7f6879accc5579b197406d5f519c2b50db2d872d85902844d\"" Aug 13 00:22:25.905787 containerd[1443]: time="2025-08-13T00:22:25.905749636Z" level=info msg="CreateContainer within sandbox \"e697b883effca38ca82ab5921d994e80f4bdcb98a10f3f35d2ff37aaa310eb64\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa508a94602c99c5f12b8a20a5eea12ecd19700e86a7d6f829f8e9d2a5b80389\"" Aug 13 00:22:25.906310 containerd[1443]: time="2025-08-13T00:22:25.906281159Z" level=info msg="StartContainer for \"aa508a94602c99c5f12b8a20a5eea12ecd19700e86a7d6f829f8e9d2a5b80389\"" Aug 13 00:22:25.919316 systemd[1]: Started cri-containerd-9bed0b7b6e4b482cba9df9fb2a4f051aef4de72cc6c9dd384dca4a9daf7932aa.scope - libcontainer container 9bed0b7b6e4b482cba9df9fb2a4f051aef4de72cc6c9dd384dca4a9daf7932aa. Aug 13 00:22:25.943343 systemd[1]: Started cri-containerd-aa508a94602c99c5f12b8a20a5eea12ecd19700e86a7d6f829f8e9d2a5b80389.scope - libcontainer container aa508a94602c99c5f12b8a20a5eea12ecd19700e86a7d6f829f8e9d2a5b80389. Aug 13 00:22:25.949042 systemd[1]: Started cri-containerd-54a8d520aa5e8cf7f6879accc5579b197406d5f519c2b50db2d872d85902844d.scope - libcontainer container 54a8d520aa5e8cf7f6879accc5579b197406d5f519c2b50db2d872d85902844d. Aug 13 00:22:25.971934 containerd[1443]: time="2025-08-13T00:22:25.969308230Z" level=info msg="StartContainer for \"9bed0b7b6e4b482cba9df9fb2a4f051aef4de72cc6c9dd384dca4a9daf7932aa\" returns successfully" Aug 13 00:22:25.990024 containerd[1443]: time="2025-08-13T00:22:25.989462062Z" level=info msg="StartContainer for \"54a8d520aa5e8cf7f6879accc5579b197406d5f519c2b50db2d872d85902844d\" returns successfully" Aug 13 00:22:26.017208 containerd[1443]: time="2025-08-13T00:22:26.006618901Z" level=info msg="StartContainer for \"aa508a94602c99c5f12b8a20a5eea12ecd19700e86a7d6f829f8e9d2a5b80389\" returns successfully" Aug 13 00:22:26.120929 kubelet[2118]: I0813 00:22:26.120901 2118 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:22:26.121633 kubelet[2118]: E0813 00:22:26.121518 2118 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Aug 13 00:22:26.422373 kubelet[2118]: E0813 00:22:26.422266 2118 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:22:26.422706 kubelet[2118]: E0813 00:22:26.422387 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:26.424292 kubelet[2118]: E0813 00:22:26.424069 2118 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:22:26.424292 kubelet[2118]: E0813 00:22:26.424197 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 
00:22:26.425741 kubelet[2118]: E0813 00:22:26.425546 2118 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:22:26.425741 kubelet[2118]: E0813 00:22:26.425639 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:27.401445 kubelet[2118]: E0813 00:22:27.401380 2118 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:22:27.429020 kubelet[2118]: E0813 00:22:27.428987 2118 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:22:27.429384 kubelet[2118]: E0813 00:22:27.429150 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:27.429956 kubelet[2118]: E0813 00:22:27.429755 2118 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:22:27.430556 kubelet[2118]: E0813 00:22:27.429890 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:27.695507 kubelet[2118]: E0813 00:22:27.695402 2118 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Aug 13 00:22:27.725861 kubelet[2118]: I0813 00:22:27.723759 2118 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:22:27.736057 kubelet[2118]: I0813 00:22:27.735880 2118 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:22:27.736057 kubelet[2118]: E0813 00:22:27.735918 2118 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:22:27.747496 kubelet[2118]: E0813 00:22:27.747456 2118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:22:27.847776 kubelet[2118]: E0813 00:22:27.847735 2118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:22:27.948701 kubelet[2118]: E0813 00:22:27.948583 2118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:22:28.049089 kubelet[2118]: E0813 00:22:28.049043 2118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:22:28.149539 kubelet[2118]: E0813 00:22:28.149495 2118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:22:28.292391 kubelet[2118]: I0813 00:22:28.292299 2118 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:28.306558 kubelet[2118]: I0813 00:22:28.306523 2118 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:22:28.311155 kubelet[2118]: I0813 00:22:28.311112 2118 kubelet.go:3194] "Creating a 
mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:22:28.385745 kubelet[2118]: I0813 00:22:28.385157 2118 apiserver.go:52] "Watching apiserver" Aug 13 00:22:28.389829 kubelet[2118]: E0813 00:22:28.389781 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:28.393207 kubelet[2118]: I0813 00:22:28.393010 2118 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:22:28.429353 kubelet[2118]: E0813 00:22:28.429322 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:28.429688 kubelet[2118]: E0813 00:22:28.429414 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:29.431174 kubelet[2118]: E0813 00:22:29.431120 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:29.667816 systemd[1]: Reloading requested from client PID 2397 ('systemctl') (unit session-7.scope)... Aug 13 00:22:29.667834 systemd[1]: Reloading... Aug 13 00:22:29.743111 zram_generator::config[2436]: No configuration found. Aug 13 00:22:29.843987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:22:29.912773 systemd[1]: Reloading finished in 244 ms. Aug 13 00:22:29.943531 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:29.957060 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:22:29.957321 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:29.957381 systemd[1]: kubelet.service: Consumed 1.620s CPU time, 127.1M memory peak, 0B memory swap peak. Aug 13 00:22:29.965435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:30.087167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:30.091666 (kubelet)[2478]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:22:30.137131 kubelet[2478]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:22:30.137131 kubelet[2478]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:22:30.137131 kubelet[2478]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:22:30.137444 kubelet[2478]: I0813 00:22:30.137227 2478 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:22:30.144124 kubelet[2478]: I0813 00:22:30.143698 2478 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:22:30.144124 kubelet[2478]: I0813 00:22:30.143722 2478 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:22:30.144124 kubelet[2478]: I0813 00:22:30.143958 2478 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:22:30.145431 kubelet[2478]: I0813 00:22:30.145412 2478 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:22:30.148151 kubelet[2478]: I0813 00:22:30.148115 2478 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:22:30.151070 kubelet[2478]: E0813 00:22:30.151044 2478 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:22:30.151153 kubelet[2478]: I0813 00:22:30.151071 2478 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:22:30.154100 kubelet[2478]: I0813 00:22:30.153608 2478 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:22:30.154100 kubelet[2478]: I0813 00:22:30.153802 2478 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:22:30.154100 kubelet[2478]: I0813 00:22:30.153827 2478 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:22:30.154100 kubelet[2478]: I0813 00:22:30.154063 2478 topology_manager.go:138] "Creating 
topology manager with none policy" Aug 13 00:22:30.154290 kubelet[2478]: I0813 00:22:30.154072 2478 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:22:30.154290 kubelet[2478]: I0813 00:22:30.154140 2478 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:30.154290 kubelet[2478]: I0813 00:22:30.154268 2478 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:22:30.154290 kubelet[2478]: I0813 00:22:30.154279 2478 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:22:30.154367 kubelet[2478]: I0813 00:22:30.154296 2478 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:22:30.154367 kubelet[2478]: I0813 00:22:30.154307 2478 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:22:30.155919 kubelet[2478]: I0813 00:22:30.155769 2478 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:22:30.158198 kubelet[2478]: I0813 00:22:30.158174 2478 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:22:30.158745 kubelet[2478]: I0813 00:22:30.158725 2478 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:22:30.158883 kubelet[2478]: I0813 00:22:30.158872 2478 server.go:1287] "Started kubelet" Aug 13 00:22:30.159131 kubelet[2478]: I0813 00:22:30.159098 2478 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:22:30.159382 kubelet[2478]: I0813 00:22:30.159335 2478 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:22:30.159657 kubelet[2478]: I0813 00:22:30.159640 2478 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:22:30.159933 kubelet[2478]: I0813 00:22:30.159904 2478 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:22:30.162009 kubelet[2478]: I0813 00:22:30.161987 2478 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:22:30.170090 kubelet[2478]: I0813 00:22:30.165245 2478 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:22:30.170090 kubelet[2478]: E0813 00:22:30.165765 2478 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:22:30.170090 kubelet[2478]: I0813 00:22:30.165801 2478 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:22:30.170090 kubelet[2478]: I0813 00:22:30.165964 2478 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:22:30.170090 kubelet[2478]: I0813 00:22:30.166130 2478 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:22:30.170090 kubelet[2478]: E0813 00:22:30.166684 2478 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:22:30.176396 kubelet[2478]: I0813 00:22:30.175769 2478 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:22:30.177172 kubelet[2478]: I0813 00:22:30.177071 2478 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:22:30.177172 kubelet[2478]: I0813 00:22:30.177158 2478 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:22:30.177282 kubelet[2478]: I0813 00:22:30.177179 2478 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:22:30.177282 kubelet[2478]: I0813 00:22:30.177208 2478 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:22:30.177282 kubelet[2478]: E0813 00:22:30.177252 2478 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:22:30.187675 kubelet[2478]: I0813 00:22:30.187639 2478 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:22:30.187675 kubelet[2478]: I0813 00:22:30.187668 2478 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:22:30.187787 kubelet[2478]: I0813 00:22:30.187750 2478 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:22:30.216639 kubelet[2478]: I0813 00:22:30.216577 2478 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:22:30.216639 kubelet[2478]: I0813 00:22:30.216626 2478 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:22:30.216639 kubelet[2478]: I0813 00:22:30.216659 2478 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:30.217304 kubelet[2478]: I0813 00:22:30.216834 2478 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:22:30.217304 kubelet[2478]: I0813 00:22:30.216860 2478 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:22:30.217304 kubelet[2478]: I0813 00:22:30.216879 2478 policy_none.go:49] "None policy: Start" Aug 13 00:22:30.217304 kubelet[2478]: I0813 00:22:30.216888 2478 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:22:30.217304 kubelet[2478]: I0813 00:22:30.216922 2478 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:22:30.217304 kubelet[2478]: I0813 00:22:30.217200 2478 state_mem.go:75] "Updated machine memory state" Aug 13 00:22:30.221551 kubelet[2478]: I0813 00:22:30.221520 2478 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:22:30.221734 kubelet[2478]: I0813 00:22:30.221707 2478 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:22:30.221765 kubelet[2478]: I0813 00:22:30.221728 2478 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:22:30.221924 kubelet[2478]: I0813 00:22:30.221901 2478 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:22:30.222861 kubelet[2478]: E0813 00:22:30.222830 2478 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:22:30.278246 kubelet[2478]: I0813 00:22:30.278203 2478 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:22:30.278246 kubelet[2478]: I0813 00:22:30.278244 2478 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:22:30.278424 kubelet[2478]: I0813 00:22:30.278299 2478 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:30.285283 kubelet[2478]: E0813 00:22:30.285218 2478 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 00:22:30.285391 kubelet[2478]: E0813 00:22:30.285369 2478 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:30.285635 kubelet[2478]: E0813 00:22:30.285606 2478 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:22:30.326359 kubelet[2478]: I0813 00:22:30.326316 2478 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:22:30.336596 kubelet[2478]: I0813 00:22:30.336407 2478 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 13 00:22:30.336596 kubelet[2478]: I0813 00:22:30.336489 2478 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:22:30.467726 kubelet[2478]: I0813 00:22:30.467688 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:30.467726 kubelet[2478]: I0813 00:22:30.467726 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f01b8099cd47e82d47d8a9334afdd21-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f01b8099cd47e82d47d8a9334afdd21\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:22:30.467966 kubelet[2478]: I0813 00:22:30.467750 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f01b8099cd47e82d47d8a9334afdd21-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f01b8099cd47e82d47d8a9334afdd21\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:22:30.467966 kubelet[2478]: I0813 00:22:30.467773 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f01b8099cd47e82d47d8a9334afdd21-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f01b8099cd47e82d47d8a9334afdd21\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:22:30.467966 kubelet[2478]: I0813 00:22:30.467797 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:30.467966 kubelet[2478]: I0813 00:22:30.467814 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:30.467966 kubelet[2478]: I0813 00:22:30.467829 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:30.468110 kubelet[2478]: I0813 00:22:30.467847 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:22:30.468110 kubelet[2478]: I0813 00:22:30.467862 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:22:30.585636 kubelet[2478]: E0813 00:22:30.585537 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:30.585636 kubelet[2478]: E0813 00:22:30.585573 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:30.586300 kubelet[2478]: E0813 00:22:30.586270 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:31.155621 kubelet[2478]: I0813 00:22:31.155579 2478 apiserver.go:52] "Watching apiserver" Aug 13 00:22:31.166625 kubelet[2478]: I0813 00:22:31.166588 2478 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:22:31.199585 kubelet[2478]: I0813 00:22:31.199452 2478 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:22:31.199720 kubelet[2478]: E0813 00:22:31.199675 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:31.199982 kubelet[2478]: E0813 00:22:31.199960 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:31.205346 kubelet[2478]: E0813 00:22:31.205308 2478 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:22:31.205482 kubelet[2478]: E0813 
00:22:31.205460 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:31.221502 kubelet[2478]: I0813 00:22:31.221363 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.221348186 podStartE2EDuration="3.221348186s" podCreationTimestamp="2025-08-13 00:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:22:31.220785213 +0000 UTC m=+1.125881768" watchObservedRunningTime="2025-08-13 00:22:31.221348186 +0000 UTC m=+1.126444701" Aug 13 00:22:31.235213 kubelet[2478]: I0813 00:22:31.235050 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.235033717 podStartE2EDuration="3.235033717s" podCreationTimestamp="2025-08-13 00:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:22:31.228230416 +0000 UTC m=+1.133326971" watchObservedRunningTime="2025-08-13 00:22:31.235033717 +0000 UTC m=+1.140130271" Aug 13 00:22:31.242924 kubelet[2478]: I0813 00:22:31.242491 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.24247552 podStartE2EDuration="3.24247552s" podCreationTimestamp="2025-08-13 00:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:22:31.235308486 +0000 UTC m=+1.140405041" watchObservedRunningTime="2025-08-13 00:22:31.24247552 +0000 UTC m=+1.147572035" Aug 13 00:22:32.202072 kubelet[2478]: E0813 00:22:32.201471 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:32.202072 kubelet[2478]: E0813 00:22:32.201558 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:32.202072 kubelet[2478]: E0813 00:22:32.201782 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:33.203236 kubelet[2478]: E0813 00:22:33.203210 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:36.899633 kubelet[2478]: I0813 00:22:36.899599 2478 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:22:36.900367 containerd[1443]: time="2025-08-13T00:22:36.900332509Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:22:36.900722 kubelet[2478]: I0813 00:22:36.900703 2478 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:22:37.567280 systemd[1]: Created slice kubepods-besteffort-podc0d44c85_b6d1_414e_9a68_d2c62a15b0b9.slice - libcontainer container kubepods-besteffort-podc0d44c85_b6d1_414e_9a68_d2c62a15b0b9.slice. 
Aug 13 00:22:37.615447 kubelet[2478]: I0813 00:22:37.615403 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c0d44c85-b6d1-414e-9a68-d2c62a15b0b9-kube-proxy\") pod \"kube-proxy-25hf7\" (UID: \"c0d44c85-b6d1-414e-9a68-d2c62a15b0b9\") " pod="kube-system/kube-proxy-25hf7" Aug 13 00:22:37.615447 kubelet[2478]: I0813 00:22:37.615446 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0d44c85-b6d1-414e-9a68-d2c62a15b0b9-lib-modules\") pod \"kube-proxy-25hf7\" (UID: \"c0d44c85-b6d1-414e-9a68-d2c62a15b0b9\") " pod="kube-system/kube-proxy-25hf7" Aug 13 00:22:37.615618 kubelet[2478]: I0813 00:22:37.615466 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7zmx\" (UniqueName: \"kubernetes.io/projected/c0d44c85-b6d1-414e-9a68-d2c62a15b0b9-kube-api-access-p7zmx\") pod \"kube-proxy-25hf7\" (UID: \"c0d44c85-b6d1-414e-9a68-d2c62a15b0b9\") " pod="kube-system/kube-proxy-25hf7" Aug 13 00:22:37.615618 kubelet[2478]: I0813 00:22:37.615491 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0d44c85-b6d1-414e-9a68-d2c62a15b0b9-xtables-lock\") pod \"kube-proxy-25hf7\" (UID: \"c0d44c85-b6d1-414e-9a68-d2c62a15b0b9\") " pod="kube-system/kube-proxy-25hf7" Aug 13 00:22:37.723838 kubelet[2478]: E0813 00:22:37.723798 2478 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 00:22:37.723838 kubelet[2478]: E0813 00:22:37.723826 2478 projected.go:194] Error preparing data for projected volume kube-api-access-p7zmx for pod kube-system/kube-proxy-25hf7: configmap "kube-root-ca.crt" not found Aug 13 00:22:37.723985 kubelet[2478]: E0813 00:22:37.723878 2478 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0d44c85-b6d1-414e-9a68-d2c62a15b0b9-kube-api-access-p7zmx podName:c0d44c85-b6d1-414e-9a68-d2c62a15b0b9 nodeName:}" failed. No retries permitted until 2025-08-13 00:22:38.223859161 +0000 UTC m=+8.128955676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p7zmx" (UniqueName: "kubernetes.io/projected/c0d44c85-b6d1-414e-9a68-d2c62a15b0b9-kube-api-access-p7zmx") pod "kube-proxy-25hf7" (UID: "c0d44c85-b6d1-414e-9a68-d2c62a15b0b9") : configmap "kube-root-ca.crt" not found Aug 13 00:22:37.938956 systemd[1]: Created slice kubepods-besteffort-poddab6524d_28b0_42c4_9134_56f00d36a04f.slice - libcontainer container kubepods-besteffort-poddab6524d_28b0_42c4_9134_56f00d36a04f.slice. 
Aug 13 00:22:38.017099 kubelet[2478]: I0813 00:22:38.017034 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dab6524d-28b0-42c4-9134-56f00d36a04f-var-lib-calico\") pod \"tigera-operator-747864d56d-k7g7t\" (UID: \"dab6524d-28b0-42c4-9134-56f00d36a04f\") " pod="tigera-operator/tigera-operator-747864d56d-k7g7t" Aug 13 00:22:38.017099 kubelet[2478]: I0813 00:22:38.017100 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkzn\" (UniqueName: \"kubernetes.io/projected/dab6524d-28b0-42c4-9134-56f00d36a04f-kube-api-access-8jkzn\") pod \"tigera-operator-747864d56d-k7g7t\" (UID: \"dab6524d-28b0-42c4-9134-56f00d36a04f\") " pod="tigera-operator/tigera-operator-747864d56d-k7g7t" Aug 13 00:22:38.242309 containerd[1443]: time="2025-08-13T00:22:38.242254888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-k7g7t,Uid:dab6524d-28b0-42c4-9134-56f00d36a04f,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:22:38.267275 containerd[1443]: time="2025-08-13T00:22:38.266797092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:38.267275 containerd[1443]: time="2025-08-13T00:22:38.267235142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:38.267275 containerd[1443]: time="2025-08-13T00:22:38.267248903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:38.267490 containerd[1443]: time="2025-08-13T00:22:38.267331025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:38.286458 systemd[1]: Started cri-containerd-b9b150eee811312eaf1be730b992df1a22d8b2ffdc51505ff35f0804440456f8.scope - libcontainer container b9b150eee811312eaf1be730b992df1a22d8b2ffdc51505ff35f0804440456f8. Aug 13 00:22:38.315257 containerd[1443]: time="2025-08-13T00:22:38.315218482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-k7g7t,Uid:dab6524d-28b0-42c4-9134-56f00d36a04f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b9b150eee811312eaf1be730b992df1a22d8b2ffdc51505ff35f0804440456f8\"" Aug 13 00:22:38.323155 containerd[1443]: time="2025-08-13T00:22:38.322142652Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:22:38.488374 kubelet[2478]: E0813 00:22:38.488324 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:38.488873 containerd[1443]: time="2025-08-13T00:22:38.488832910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-25hf7,Uid:c0d44c85-b6d1-414e-9a68-d2c62a15b0b9,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:38.517579 containerd[1443]: time="2025-08-13T00:22:38.516849998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:38.517579 containerd[1443]: time="2025-08-13T00:22:38.517482694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:38.517579 containerd[1443]: time="2025-08-13T00:22:38.517500334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:38.517789 containerd[1443]: time="2025-08-13T00:22:38.517606897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:38.538308 systemd[1]: Started cri-containerd-30a96e3f898988dae5b748c651e2f0de0884e4022c7c0b3e1452575098cd7d2a.scope - libcontainer container 30a96e3f898988dae5b748c651e2f0de0884e4022c7c0b3e1452575098cd7d2a. Aug 13 00:22:38.562029 containerd[1443]: time="2025-08-13T00:22:38.561987988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-25hf7,Uid:c0d44c85-b6d1-414e-9a68-d2c62a15b0b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"30a96e3f898988dae5b748c651e2f0de0884e4022c7c0b3e1452575098cd7d2a\"" Aug 13 00:22:38.563099 kubelet[2478]: E0813 00:22:38.562932 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:38.565426 containerd[1443]: time="2025-08-13T00:22:38.565380391Z" level=info msg="CreateContainer within sandbox \"30a96e3f898988dae5b748c651e2f0de0884e4022c7c0b3e1452575098cd7d2a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:22:38.586161 containerd[1443]: time="2025-08-13T00:22:38.586035979Z" level=info msg="CreateContainer within sandbox \"30a96e3f898988dae5b748c651e2f0de0884e4022c7c0b3e1452575098cd7d2a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b4f72ee04c1431b58ab97eff4df43f25df7c8e41d720cd0b16f927ea8f3509d8\"" Aug 13 00:22:38.586730 containerd[1443]: time="2025-08-13T00:22:38.586694235Z" level=info msg="StartContainer for \"b4f72ee04c1431b58ab97eff4df43f25df7c8e41d720cd0b16f927ea8f3509d8\"" Aug 13 00:22:38.618328 systemd[1]: Started cri-containerd-b4f72ee04c1431b58ab97eff4df43f25df7c8e41d720cd0b16f927ea8f3509d8.scope - libcontainer container b4f72ee04c1431b58ab97eff4df43f25df7c8e41d720cd0b16f927ea8f3509d8. Aug 13 00:22:38.648405 containerd[1443]: time="2025-08-13T00:22:38.648352591Z" level=info msg="StartContainer for \"b4f72ee04c1431b58ab97eff4df43f25df7c8e41d720cd0b16f927ea8f3509d8\" returns successfully" Aug 13 00:22:39.214322 kubelet[2478]: E0813 00:22:39.214146 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:39.224771 kubelet[2478]: I0813 00:22:39.224511 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-25hf7" podStartSLOduration=2.224492703 podStartE2EDuration="2.224492703s" podCreationTimestamp="2025-08-13 00:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:22:39.224323419 +0000 UTC m=+9.129419934" watchObservedRunningTime="2025-08-13 00:22:39.224492703 +0000 UTC m=+9.129589258" Aug 13 00:22:40.556222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143450215.mount: Deactivated successfully. 
Aug 13 00:22:40.570002 kubelet[2478]: E0813 00:22:40.569908 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:41.223307 kubelet[2478]: E0813 00:22:41.223252 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:41.920030 kubelet[2478]: E0813 00:22:41.919954 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:42.046959 kubelet[2478]: E0813 00:22:42.046924 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:42.224433 kubelet[2478]: E0813 00:22:42.224391 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:42.225246 kubelet[2478]: E0813 00:22:42.224972 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:22:45.098763 update_engine[1428]: I20250813 00:22:45.098681 1428 update_attempter.cc:509] Updating boot flags... Aug 13 00:22:45.119110 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2790) Aug 13 00:22:45.155594 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2793) Aug 13 00:22:52.224818 containerd[1443]: time="2025-08-13T00:22:52.224396283Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:52.225503 containerd[1443]: time="2025-08-13T00:22:52.224878529Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 13 00:22:52.225675 containerd[1443]: time="2025-08-13T00:22:52.225648138Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:52.228767 containerd[1443]: time="2025-08-13T00:22:52.228535253Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:52.229624 containerd[1443]: time="2025-08-13T00:22:52.229505425Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 13.907293851s" Aug 13 00:22:52.229624 containerd[1443]: time="2025-08-13T00:22:52.229539466Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 13 00:22:52.234534 containerd[1443]: time="2025-08-13T00:22:52.234494686Z" level=info msg="CreateContainer within sandbox 
\"b9b150eee811312eaf1be730b992df1a22d8b2ffdc51505ff35f0804440456f8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:22:52.269294 containerd[1443]: time="2025-08-13T00:22:52.269226351Z" level=info msg="CreateContainer within sandbox \"b9b150eee811312eaf1be730b992df1a22d8b2ffdc51505ff35f0804440456f8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8f80c803032f837cc2193bd3fce5463be070a3e47975307e8758d6d1e1f198f2\"" Aug 13 00:22:52.270198 containerd[1443]: time="2025-08-13T00:22:52.269748077Z" level=info msg="StartContainer for \"8f80c803032f837cc2193bd3fce5463be070a3e47975307e8758d6d1e1f198f2\"" Aug 13 00:22:52.300341 systemd[1]: Started cri-containerd-8f80c803032f837cc2193bd3fce5463be070a3e47975307e8758d6d1e1f198f2.scope - libcontainer container 8f80c803032f837cc2193bd3fce5463be070a3e47975307e8758d6d1e1f198f2. Aug 13 00:22:52.322001 containerd[1443]: time="2025-08-13T00:22:52.320701741Z" level=info msg="StartContainer for \"8f80c803032f837cc2193bd3fce5463be070a3e47975307e8758d6d1e1f198f2\" returns successfully" Aug 13 00:22:53.259225 kubelet[2478]: I0813 00:22:53.259113 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-k7g7t" podStartSLOduration=2.344953674 podStartE2EDuration="16.259072925s" podCreationTimestamp="2025-08-13 00:22:37 +0000 UTC" firstStartedPulling="2025-08-13 00:22:38.319282622 +0000 UTC m=+8.224379177" lastFinishedPulling="2025-08-13 00:22:52.233401873 +0000 UTC m=+22.138498428" observedRunningTime="2025-08-13 00:22:53.258912683 +0000 UTC m=+23.164009238" watchObservedRunningTime="2025-08-13 00:22:53.259072925 +0000 UTC m=+23.164169440" Aug 13 00:22:57.830364 sudo[1623]: pam_unix(sudo:session): session closed for user root Aug 13 00:22:57.836281 sshd[1620]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:57.844324 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:22:57.844549 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:52238.service: Deactivated successfully. Aug 13 00:22:57.847859 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:22:57.848032 systemd[1]: session-7.scope: Consumed 7.613s CPU time, 156.3M memory peak, 0B memory swap peak. Aug 13 00:22:57.849121 systemd-logind[1425]: Removed session 7. Aug 13 00:23:01.428920 systemd[1]: Created slice kubepods-besteffort-pod516fffa7_b74c_40be_8118_4848f7c3ca74.slice - libcontainer container kubepods-besteffort-pod516fffa7_b74c_40be_8118_4848f7c3ca74.slice. 
Aug 13 00:23:01.475576 kubelet[2478]: I0813 00:23:01.475528 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/516fffa7-b74c-40be-8118-4848f7c3ca74-typha-certs\") pod \"calico-typha-5c6dd6c6c5-4v9l4\" (UID: \"516fffa7-b74c-40be-8118-4848f7c3ca74\") " pod="calico-system/calico-typha-5c6dd6c6c5-4v9l4" Aug 13 00:23:01.475576 kubelet[2478]: I0813 00:23:01.475581 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv4zs\" (UniqueName: \"kubernetes.io/projected/516fffa7-b74c-40be-8118-4848f7c3ca74-kube-api-access-pv4zs\") pod \"calico-typha-5c6dd6c6c5-4v9l4\" (UID: \"516fffa7-b74c-40be-8118-4848f7c3ca74\") " pod="calico-system/calico-typha-5c6dd6c6c5-4v9l4" Aug 13 00:23:01.476007 kubelet[2478]: I0813 00:23:01.475603 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/516fffa7-b74c-40be-8118-4848f7c3ca74-tigera-ca-bundle\") pod \"calico-typha-5c6dd6c6c5-4v9l4\" (UID: \"516fffa7-b74c-40be-8118-4848f7c3ca74\") " pod="calico-system/calico-typha-5c6dd6c6c5-4v9l4" Aug 13 00:23:01.607306 systemd[1]: Created slice kubepods-besteffort-podbd32b894_aa15_43f0_af0c_4476d7564744.slice - libcontainer container kubepods-besteffort-podbd32b894_aa15_43f0_af0c_4476d7564744.slice. Aug 13 00:23:01.677258 kubelet[2478]: I0813 00:23:01.677200 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bd32b894-aa15-43f0-af0c-4476d7564744-cni-net-dir\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.677258 kubelet[2478]: I0813 00:23:01.677252 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd32b894-aa15-43f0-af0c-4476d7564744-tigera-ca-bundle\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.677423 kubelet[2478]: I0813 00:23:01.677272 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bd32b894-aa15-43f0-af0c-4476d7564744-node-certs\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.677423 kubelet[2478]: I0813 00:23:01.677288 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bd32b894-aa15-43f0-af0c-4476d7564744-policysync\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.677423 kubelet[2478]: I0813 00:23:01.677304 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bd32b894-aa15-43f0-af0c-4476d7564744-cni-log-dir\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.677423 kubelet[2478]: I0813 00:23:01.677326 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/bd32b894-aa15-43f0-af0c-4476d7564744-flexvol-driver-host\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.677423 kubelet[2478]: I0813 00:23:01.677347 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd32b894-aa15-43f0-af0c-4476d7564744-xtables-lock\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.677542 kubelet[2478]: I0813 00:23:01.677364 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bd32b894-aa15-43f0-af0c-4476d7564744-cni-bin-dir\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.677542 kubelet[2478]: I0813 00:23:01.677381 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd32b894-aa15-43f0-af0c-4476d7564744-var-lib-calico\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.677542 kubelet[2478]: I0813 00:23:01.677402 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd32b894-aa15-43f0-af0c-4476d7564744-lib-modules\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.677542 kubelet[2478]: I0813 00:23:01.677419 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bd32b894-aa15-43f0-af0c-4476d7564744-var-run-calico\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.677542 kubelet[2478]: I0813 00:23:01.677434 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7992p\" (UniqueName: \"kubernetes.io/projected/bd32b894-aa15-43f0-af0c-4476d7564744-kube-api-access-7992p\") pod \"calico-node-dw7dr\" (UID: \"bd32b894-aa15-43f0-af0c-4476d7564744\") " pod="calico-system/calico-node-dw7dr" Aug 13 00:23:01.733538 kubelet[2478]: E0813 00:23:01.733495 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:01.742179 containerd[1443]: time="2025-08-13T00:23:01.742136882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c6dd6c6c5-4v9l4,Uid:516fffa7-b74c-40be-8118-4848f7c3ca74,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:01.814396 kubelet[2478]: E0813 00:23:01.814366 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.814396 kubelet[2478]: W0813 00:23:01.814389 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.816244 kubelet[2478]: E0813 00:23:01.816211 2478 plugins.go:695] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.850269 kubelet[2478]: E0813 00:23:01.850217 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xdctl" podUID="4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77" Aug 13 00:23:01.855580 containerd[1443]: time="2025-08-13T00:23:01.854683522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:01.855580 containerd[1443]: time="2025-08-13T00:23:01.854770683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:01.855580 containerd[1443]: time="2025-08-13T00:23:01.854782363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:01.857098 containerd[1443]: time="2025-08-13T00:23:01.854917724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:01.865510 kubelet[2478]: E0813 00:23:01.864593 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.865510 kubelet[2478]: W0813 00:23:01.864616 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.865510 kubelet[2478]: E0813 00:23:01.864638 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.865510 kubelet[2478]: E0813 00:23:01.864824 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.869975 kubelet[2478]: W0813 00:23:01.864833 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.869975 kubelet[2478]: E0813 00:23:01.869981 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.871845 kubelet[2478]: E0813 00:23:01.871303 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.871845 kubelet[2478]: W0813 00:23:01.871328 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.871845 kubelet[2478]: E0813 00:23:01.871346 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [kubelet[2478]: the three-line FlexVolume probe failure above (driver-call.go:262, driver-call.go:149, plugins.go:695) repeated 14 more times between 00:23:01.871 and 00:23:01.879] Aug 13 00:23:01.880016 kubelet[2478]: E0813 00:23:01.879372 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.880016 kubelet[2478]: W0813 00:23:01.879381 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.880016 kubelet[2478]: E0813 00:23:01.879390 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 00:23:01.880016 kubelet[2478]: E0813 00:23:01.879637 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.880016 kubelet[2478]: W0813 00:23:01.879645 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.880016 kubelet[2478]: E0813 00:23:01.879654 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.880016 kubelet[2478]: E0813 00:23:01.879882 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.880419 kubelet[2478]: W0813 00:23:01.879891 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.880419 kubelet[2478]: E0813 00:23:01.879903 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.880971 systemd[1]: Started cri-containerd-204f74d50e4d80b75b5ddfc95e73c272d39e9c2fb2a06854ca6b7522519edf2e.scope - libcontainer container 204f74d50e4d80b75b5ddfc95e73c272d39e9c2fb2a06854ca6b7522519edf2e. Aug 13 00:23:01.883821 kubelet[2478]: E0813 00:23:01.883131 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.883821 kubelet[2478]: W0813 00:23:01.883154 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.883821 kubelet[2478]: E0813 00:23:01.883184 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.883821 kubelet[2478]: I0813 00:23:01.883210 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77-kubelet-dir\") pod \"csi-node-driver-xdctl\" (UID: \"4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77\") " pod="calico-system/csi-node-driver-xdctl" Aug 13 00:23:01.884858 kubelet[2478]: E0813 00:23:01.884466 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.884858 kubelet[2478]: W0813 00:23:01.884493 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.884858 kubelet[2478]: E0813 00:23:01.884510 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.884858 kubelet[2478]: I0813 00:23:01.884536 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77-varrun\") pod \"csi-node-driver-xdctl\" (UID: \"4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77\") " pod="calico-system/csi-node-driver-xdctl" Aug 13 00:23:01.884858 kubelet[2478]: E0813 00:23:01.884765 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.884858 kubelet[2478]: W0813 00:23:01.884777 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.884858 kubelet[2478]: E0813 00:23:01.884807 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.884858 kubelet[2478]: I0813 00:23:01.884826 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77-socket-dir\") pod \"csi-node-driver-xdctl\" (UID: \"4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77\") " pod="calico-system/csi-node-driver-xdctl" Aug 13 00:23:01.885210 kubelet[2478]: E0813 00:23:01.885011 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.885210 kubelet[2478]: W0813 00:23:01.885021 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.885210 kubelet[2478]: E0813 00:23:01.885049 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.885210 kubelet[2478]: I0813 00:23:01.885066 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67l7f\" (UniqueName: \"kubernetes.io/projected/4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77-kube-api-access-67l7f\") pod \"csi-node-driver-xdctl\" (UID: \"4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77\") " pod="calico-system/csi-node-driver-xdctl" Aug 13 00:23:01.885490 kubelet[2478]: E0813 00:23:01.885339 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.885490 kubelet[2478]: W0813 00:23:01.885352 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.885616 kubelet[2478]: E0813 00:23:01.885555 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.885616 kubelet[2478]: W0813 00:23:01.885572 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.885616 kubelet[2478]: E0813 00:23:01.885402 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.885800 kubelet[2478]: I0813 00:23:01.885736 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77-registration-dir\") pod \"csi-node-driver-xdctl\" (UID: \"4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77\") " pod="calico-system/csi-node-driver-xdctl" Aug 13 00:23:01.885800 kubelet[2478]: E0813 00:23:01.885748 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.885800 kubelet[2478]: E0813 00:23:01.885741 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.885800 kubelet[2478]: W0813 00:23:01.885772 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.886019 kubelet[2478]: E0813 00:23:01.885785 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.886019 kubelet[2478]: E0813 00:23:01.885967 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.886019 kubelet[2478]: W0813 00:23:01.885978 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.886019 kubelet[2478]: E0813 00:23:01.885994 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [kubelet[2478]: the FlexVolume probe failure triplet repeated 4 more times between 00:23:01.890 and 00:23:01.891] Aug 13 00:23:01.892071 kubelet[2478]: E0813 00:23:01.892055 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.892071 kubelet[2478]: W0813 00:23:01.892067 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.892071 kubelet[2478]: E0813 00:23:01.892096 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 00:23:01.892376 kubelet[2478]: E0813 00:23:01.892287 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.892376 kubelet[2478]: W0813 00:23:01.892301 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.892376 kubelet[2478]: E0813 00:23:01.892310 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.893222 kubelet[2478]: E0813 00:23:01.893186 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.893222 kubelet[2478]: W0813 00:23:01.893203 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.893222 kubelet[2478]: E0813 00:23:01.893215 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.911720 containerd[1443]: time="2025-08-13T00:23:01.911660368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dw7dr,Uid:bd32b894-aa15-43f0-af0c-4476d7564744,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:01.926889 containerd[1443]: time="2025-08-13T00:23:01.926832377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c6dd6c6c5-4v9l4,Uid:516fffa7-b74c-40be-8118-4848f7c3ca74,Namespace:calico-system,Attempt:0,} returns sandbox id \"204f74d50e4d80b75b5ddfc95e73c272d39e9c2fb2a06854ca6b7522519edf2e\"" Aug 13 00:23:01.927582 kubelet[2478]: E0813 00:23:01.927550 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:01.928279 containerd[1443]: time="2025-08-13T00:23:01.928249709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:23:01.986840 kubelet[2478]: E0813 00:23:01.986670 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.986840 kubelet[2478]: W0813 00:23:01.986692 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.986840 kubelet[2478]: E0813 00:23:01.986711 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [kubelet[2478]: the three-line FlexVolume probe failure repeated 24 more times between 00:23:01.987 and 00:23:01.999] Aug 13 00:23:02.107045 kubelet[2478]: E0813 00:23:02.107006 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:02.107045 kubelet[2478]: W0813 00:23:02.107028 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:02.107045 kubelet[2478]: E0813 00:23:02.107047 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 00:23:02.117052 containerd[1443]: time="2025-08-13T00:23:02.116872124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:02.117052 containerd[1443]: time="2025-08-13T00:23:02.116973325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:02.117052 containerd[1443]: time="2025-08-13T00:23:02.116991485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:02.117305 containerd[1443]: time="2025-08-13T00:23:02.117104046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:02.138236 systemd[1]: Started cri-containerd-8128d7400e17d5a953296d6a8b659382f22deec102cbf21db417eb511d1632ba.scope - libcontainer container 8128d7400e17d5a953296d6a8b659382f22deec102cbf21db417eb511d1632ba. Aug 13 00:23:02.163213 containerd[1443]: time="2025-08-13T00:23:02.162618780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dw7dr,Uid:bd32b894-aa15-43f0-af0c-4476d7564744,Namespace:calico-system,Attempt:0,} returns sandbox id \"8128d7400e17d5a953296d6a8b659382f22deec102cbf21db417eb511d1632ba\"" Aug 13 00:23:02.936166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370338539.mount: Deactivated successfully. Aug 13 00:23:03.427450 containerd[1443]: time="2025-08-13T00:23:03.427392916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:03.428488 containerd[1443]: time="2025-08-13T00:23:03.428454484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 13 00:23:03.429906 containerd[1443]: time="2025-08-13T00:23:03.429268371Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:03.431846 containerd[1443]: time="2025-08-13T00:23:03.431807431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:03.432739 containerd[1443]: time="2025-08-13T00:23:03.432708718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.504423488s" Aug 13 00:23:03.432838 containerd[1443]: time="2025-08-13T00:23:03.432821959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 13 00:23:03.434107 containerd[1443]: time="2025-08-13T00:23:03.434046009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:23:03.449318 containerd[1443]: time="2025-08-13T00:23:03.448484204Z" level=info msg="CreateContainer within sandbox \"204f74d50e4d80b75b5ddfc95e73c272d39e9c2fb2a06854ca6b7522519edf2e\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:23:03.457320 containerd[1443]: time="2025-08-13T00:23:03.457266033Z" level=info msg="CreateContainer within sandbox \"204f74d50e4d80b75b5ddfc95e73c272d39e9c2fb2a06854ca6b7522519edf2e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"51d00355769a689548767df7824ba05612c8b2fefd84e1ea93174b5dab2bde1e\"" Aug 13 00:23:03.458375 containerd[1443]: time="2025-08-13T00:23:03.458339402Z" level=info msg="StartContainer for \"51d00355769a689548767df7824ba05612c8b2fefd84e1ea93174b5dab2bde1e\"" Aug 13 00:23:03.486322 systemd[1]: Started cri-containerd-51d00355769a689548767df7824ba05612c8b2fefd84e1ea93174b5dab2bde1e.scope - libcontainer container 51d00355769a689548767df7824ba05612c8b2fefd84e1ea93174b5dab2bde1e. Aug 13 00:23:03.524342 containerd[1443]: time="2025-08-13T00:23:03.524277607Z" level=info msg="StartContainer for \"51d00355769a689548767df7824ba05612c8b2fefd84e1ea93174b5dab2bde1e\" returns successfully" Aug 13 00:23:04.178413 kubelet[2478]: E0813 00:23:04.178362 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xdctl" podUID="4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77" Aug 13 00:23:04.285052 kubelet[2478]: E0813 00:23:04.284301 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:04.295515 kubelet[2478]: E0813 00:23:04.295474 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:04.295515 kubelet[2478]: W0813 00:23:04.295506 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:04.295787 kubelet[2478]: E0813 00:23:04.295528 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:04.296038 kubelet[2478]: E0813 00:23:04.295984 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:04.296038 kubelet[2478]: W0813 00:23:04.295997 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:04.296038 kubelet[2478]: E0813 00:23:04.296018 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:04.301163 kubelet[2478]: E0813 00:23:04.301143 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:04.301163 kubelet[2478]: W0813 00:23:04.301158 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:04.301265 kubelet[2478]: E0813 00:23:04.301171 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:04.301406 kubelet[2478]: E0813 00:23:04.301393 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:04.301406 kubelet[2478]: W0813 00:23:04.301404 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:04.301479 kubelet[2478]: E0813 00:23:04.301413 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:04.301566 kubelet[2478]: E0813 00:23:04.301555 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:04.301566 kubelet[2478]: W0813 00:23:04.301565 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:04.301615 kubelet[2478]: E0813 00:23:04.301573 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:04.307885 kubelet[2478]: I0813 00:23:04.307829 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c6dd6c6c5-4v9l4" podStartSLOduration=1.8022027440000001 podStartE2EDuration="3.307815042s" podCreationTimestamp="2025-08-13 00:23:01 +0000 UTC" firstStartedPulling="2025-08-13 00:23:01.927992347 +0000 UTC m=+31.833088902" lastFinishedPulling="2025-08-13 00:23:03.433604645 +0000 UTC m=+33.338701200" observedRunningTime="2025-08-13 00:23:04.297433882 +0000 UTC m=+34.202530517" watchObservedRunningTime="2025-08-13 00:23:04.307815042 +0000 UTC m=+34.212911597" Aug 13 00:23:04.310643 kubelet[2478]: E0813 00:23:04.310584 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:04.310643 kubelet[2478]: W0813 00:23:04.310601 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:04.310643 kubelet[2478]: E0813 00:23:04.310615 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
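The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check of the logged monotonic offsets, assuming that subtraction is all the tracker does here:

```go
package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) copied from the tracker entry above.
	const (
		firstStartedPulling = 31.833088902 // s
		lastFinishedPulling = 33.338701200 // s
		e2eDuration         = 3.307815042  // s, watchObservedRunningTime - podCreationTimestamp
	)
	pull := lastFinishedPulling - firstStartedPulling // 1.505612298 s
	slo := e2eDuration - pull                         // 1.802202744 s
	fmt.Printf("image pull: %.9fs, podStartSLOduration: %.9fs\n", pull, slo)
}
```

The result matches the logged podStartSLOduration=1.8022027440000001 up to float64 rounding noise.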
Error: unexpected end of JSON input" Aug 13 00:23:04.318518 kubelet[2478]: E0813 00:23:04.318503 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:04.318518 kubelet[2478]: W0813 00:23:04.318516 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:04.318594 kubelet[2478]: E0813 00:23:04.318527 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:04.318859 kubelet[2478]: E0813 00:23:04.318846 2478 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:04.318859 kubelet[2478]: W0813 00:23:04.318858 2478 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:04.318963 kubelet[2478]: E0813 00:23:04.318868 2478 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:04.496520 containerd[1443]: time="2025-08-13T00:23:04.495784889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:04.496520 containerd[1443]: time="2025-08-13T00:23:04.496261773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 13 00:23:04.497159 containerd[1443]: time="2025-08-13T00:23:04.497128140Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:04.499168 containerd[1443]: time="2025-08-13T00:23:04.499130675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:04.500026 containerd[1443]: time="2025-08-13T00:23:04.499988082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.065907553s" Aug 13 00:23:04.500067 containerd[1443]: time="2025-08-13T00:23:04.500027882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 13 00:23:04.502554 containerd[1443]: time="2025-08-13T00:23:04.502512421Z" level=info msg="CreateContainer within sandbox \"8128d7400e17d5a953296d6a8b659382f22deec102cbf21db417eb511d1632ba\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:23:04.517052 containerd[1443]: time="2025-08-13T00:23:04.517003253Z" level=info msg="CreateContainer within sandbox 
\"8128d7400e17d5a953296d6a8b659382f22deec102cbf21db417eb511d1632ba\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5314dba4dd7081b85d901f92c548ef6bf57d1bfd052ebc5aa012a6631d83c18f\"" Aug 13 00:23:04.518863 containerd[1443]: time="2025-08-13T00:23:04.517730978Z" level=info msg="StartContainer for \"5314dba4dd7081b85d901f92c548ef6bf57d1bfd052ebc5aa012a6631d83c18f\"" Aug 13 00:23:04.552273 systemd[1]: Started cri-containerd-5314dba4dd7081b85d901f92c548ef6bf57d1bfd052ebc5aa012a6631d83c18f.scope - libcontainer container 5314dba4dd7081b85d901f92c548ef6bf57d1bfd052ebc5aa012a6631d83c18f. Aug 13 00:23:04.583431 containerd[1443]: time="2025-08-13T00:23:04.583377204Z" level=info msg="StartContainer for \"5314dba4dd7081b85d901f92c548ef6bf57d1bfd052ebc5aa012a6631d83c18f\" returns successfully" Aug 13 00:23:04.592482 systemd[1]: run-containerd-runc-k8s.io-5314dba4dd7081b85d901f92c548ef6bf57d1bfd052ebc5aa012a6631d83c18f-runc.UNnA9h.mount: Deactivated successfully. Aug 13 00:23:04.623964 systemd[1]: cri-containerd-5314dba4dd7081b85d901f92c548ef6bf57d1bfd052ebc5aa012a6631d83c18f.scope: Deactivated successfully. Aug 13 00:23:04.645822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5314dba4dd7081b85d901f92c548ef6bf57d1bfd052ebc5aa012a6631d83c18f-rootfs.mount: Deactivated successfully. Aug 13 00:23:04.755939 containerd[1443]: time="2025-08-13T00:23:04.751068575Z" level=info msg="shim disconnected" id=5314dba4dd7081b85d901f92c548ef6bf57d1bfd052ebc5aa012a6631d83c18f namespace=k8s.io Aug 13 00:23:04.755939 containerd[1443]: time="2025-08-13T00:23:04.755860972Z" level=warning msg="cleaning up after shim disconnected" id=5314dba4dd7081b85d901f92c548ef6bf57d1bfd052ebc5aa012a6631d83c18f namespace=k8s.io Aug 13 00:23:04.755939 containerd[1443]: time="2025-08-13T00:23:04.755879172Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:23:05.288103 kubelet[2478]: E0813 00:23:05.287193 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:05.289131 containerd[1443]: time="2025-08-13T00:23:05.288703244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:23:06.178224 kubelet[2478]: E0813 00:23:06.178169 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xdctl" podUID="4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77" Aug 13 00:23:06.291117 kubelet[2478]: E0813 00:23:06.289867 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:08.178313 kubelet[2478]: E0813 00:23:08.178259 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xdctl" podUID="4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77" Aug 13 00:23:08.486607 containerd[1443]: time="2025-08-13T00:23:08.486543472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:08.487167 containerd[1443]: 
time="2025-08-13T00:23:08.487133636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 13 00:23:08.488272 containerd[1443]: time="2025-08-13T00:23:08.488237363Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:08.490322 containerd[1443]: time="2025-08-13T00:23:08.490283297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:08.491339 containerd[1443]: time="2025-08-13T00:23:08.491124863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.202385858s" Aug 13 00:23:08.491339 containerd[1443]: time="2025-08-13T00:23:08.491164143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:23:08.494952 containerd[1443]: time="2025-08-13T00:23:08.494916529Z" level=info msg="CreateContainer within sandbox \"8128d7400e17d5a953296d6a8b659382f22deec102cbf21db417eb511d1632ba\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:23:08.520049 containerd[1443]: time="2025-08-13T00:23:08.520006300Z" level=info msg="CreateContainer within sandbox \"8128d7400e17d5a953296d6a8b659382f22deec102cbf21db417eb511d1632ba\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"44b83d4dd59681ad457979f9f5c1a2b1936da465c254e5534d72961059db7c5e\"" Aug 13 00:23:08.521147 containerd[1443]: time="2025-08-13T00:23:08.520802185Z" level=info msg="StartContainer for \"44b83d4dd59681ad457979f9f5c1a2b1936da465c254e5534d72961059db7c5e\"" Aug 13 00:23:08.550327 systemd[1]: Started cri-containerd-44b83d4dd59681ad457979f9f5c1a2b1936da465c254e5534d72961059db7c5e.scope - libcontainer container 44b83d4dd59681ad457979f9f5c1a2b1936da465c254e5534d72961059db7c5e. Aug 13 00:23:08.606382 containerd[1443]: time="2025-08-13T00:23:08.606317448Z" level=info msg="StartContainer for \"44b83d4dd59681ad457979f9f5c1a2b1936da465c254e5534d72961059db7c5e\" returns successfully" Aug 13 00:23:09.387286 systemd[1]: cri-containerd-44b83d4dd59681ad457979f9f5c1a2b1936da465c254e5534d72961059db7c5e.scope: Deactivated successfully. Aug 13 00:23:09.397577 kubelet[2478]: I0813 00:23:09.397531 2478 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:23:09.424101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44b83d4dd59681ad457979f9f5c1a2b1936da465c254e5534d72961059db7c5e-rootfs.mount: Deactivated successfully. Aug 13 00:23:09.442578 systemd[1]: Created slice kubepods-besteffort-pod184c1568_3e9d_42d7_9996_6febf66e9f97.slice - libcontainer container kubepods-besteffort-pod184c1568_3e9d_42d7_9996_6febf66e9f97.slice. Aug 13 00:23:09.455913 systemd[1]: Created slice kubepods-besteffort-pod5e9c7520_456a_4fb4_9e17_2a8c3cda47aa.slice - libcontainer container kubepods-besteffort-pod5e9c7520_456a_4fb4_9e17_2a8c3cda47aa.slice. 
Aug 13 00:23:09.470461 systemd[1]: Created slice kubepods-besteffort-podc7a20528_2e49_4684_bead_e1d4d74e5a78.slice - libcontainer container kubepods-besteffort-podc7a20528_2e49_4684_bead_e1d4d74e5a78.slice. Aug 13 00:23:09.479480 systemd[1]: Created slice kubepods-burstable-pode841be3f_a724_4526_a7fd_880807a1af6d.slice - libcontainer container kubepods-burstable-pode841be3f_a724_4526_a7fd_880807a1af6d.slice. Aug 13 00:23:09.489663 systemd[1]: Created slice kubepods-burstable-poddf5c37b4_17cf_442e_8788_172a4eba1e3f.slice - libcontainer container kubepods-burstable-poddf5c37b4_17cf_442e_8788_172a4eba1e3f.slice. Aug 13 00:23:09.495287 systemd[1]: Created slice kubepods-besteffort-pod1d1ad45a_7073_4dfd_8cbf_ad24b938295e.slice - libcontainer container kubepods-besteffort-pod1d1ad45a_7073_4dfd_8cbf_ad24b938295e.slice. Aug 13 00:23:09.500541 systemd[1]: Created slice kubepods-besteffort-pod087e83b1_0f73_4043_8aea_dc61a1b40e0e.slice - libcontainer container kubepods-besteffort-pod087e83b1_0f73_4043_8aea_dc61a1b40e0e.slice. Aug 13 00:23:09.558013 kubelet[2478]: I0813 00:23:09.557910 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5c37b4-17cf-442e-8788-172a4eba1e3f-config-volume\") pod \"coredns-668d6bf9bc-wjkgc\" (UID: \"df5c37b4-17cf-442e-8788-172a4eba1e3f\") " pod="kube-system/coredns-668d6bf9bc-wjkgc" Aug 13 00:23:09.558013 kubelet[2478]: I0813 00:23:09.557961 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjhtb\" (UniqueName: \"kubernetes.io/projected/5e9c7520-456a-4fb4-9e17-2a8c3cda47aa-kube-api-access-tjhtb\") pod \"goldmane-768f4c5c69-9pbbd\" (UID: \"5e9c7520-456a-4fb4-9e17-2a8c3cda47aa\") " pod="calico-system/goldmane-768f4c5c69-9pbbd" Aug 13 00:23:09.558013 kubelet[2478]: I0813 00:23:09.557985 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e841be3f-a724-4526-a7fd-880807a1af6d-config-volume\") pod \"coredns-668d6bf9bc-hc28g\" (UID: \"e841be3f-a724-4526-a7fd-880807a1af6d\") " pod="kube-system/coredns-668d6bf9bc-hc28g" Aug 13 00:23:09.558013 kubelet[2478]: I0813 00:23:09.558006 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5e9c7520-456a-4fb4-9e17-2a8c3cda47aa-goldmane-key-pair\") pod \"goldmane-768f4c5c69-9pbbd\" (UID: \"5e9c7520-456a-4fb4-9e17-2a8c3cda47aa\") " pod="calico-system/goldmane-768f4c5c69-9pbbd" Aug 13 00:23:09.558013 kubelet[2478]: I0813 00:23:09.558027 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25h8d\" (UniqueName: \"kubernetes.io/projected/c7a20528-2e49-4684-bead-e1d4d74e5a78-kube-api-access-25h8d\") pod \"calico-apiserver-7d4b446f75-xbvj6\" (UID: \"c7a20528-2e49-4684-bead-e1d4d74e5a78\") " pod="calico-apiserver/calico-apiserver-7d4b446f75-xbvj6" Aug 13 00:23:09.558795 kubelet[2478]: I0813 00:23:09.558043 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mmj9\" (UniqueName: \"kubernetes.io/projected/e841be3f-a724-4526-a7fd-880807a1af6d-kube-api-access-9mmj9\") pod \"coredns-668d6bf9bc-hc28g\" (UID: \"e841be3f-a724-4526-a7fd-880807a1af6d\") " pod="kube-system/coredns-668d6bf9bc-hc28g" Aug 13 00:23:09.558795 kubelet[2478]: I0813 
00:23:09.558060 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlpf5\" (UniqueName: \"kubernetes.io/projected/df5c37b4-17cf-442e-8788-172a4eba1e3f-kube-api-access-zlpf5\") pod \"coredns-668d6bf9bc-wjkgc\" (UID: \"df5c37b4-17cf-442e-8788-172a4eba1e3f\") " pod="kube-system/coredns-668d6bf9bc-wjkgc" Aug 13 00:23:09.558795 kubelet[2478]: I0813 00:23:09.558106 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9c7520-456a-4fb4-9e17-2a8c3cda47aa-config\") pod \"goldmane-768f4c5c69-9pbbd\" (UID: \"5e9c7520-456a-4fb4-9e17-2a8c3cda47aa\") " pod="calico-system/goldmane-768f4c5c69-9pbbd" Aug 13 00:23:09.558795 kubelet[2478]: I0813 00:23:09.558124 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e9c7520-456a-4fb4-9e17-2a8c3cda47aa-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-9pbbd\" (UID: \"5e9c7520-456a-4fb4-9e17-2a8c3cda47aa\") " pod="calico-system/goldmane-768f4c5c69-9pbbd" Aug 13 00:23:09.558795 kubelet[2478]: I0813 00:23:09.558213 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c7a20528-2e49-4684-bead-e1d4d74e5a78-calico-apiserver-certs\") pod \"calico-apiserver-7d4b446f75-xbvj6\" (UID: \"c7a20528-2e49-4684-bead-e1d4d74e5a78\") " pod="calico-apiserver/calico-apiserver-7d4b446f75-xbvj6" Aug 13 00:23:09.558930 kubelet[2478]: I0813 00:23:09.558258 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/184c1568-3e9d-42d7-9996-6febf66e9f97-whisker-backend-key-pair\") pod \"whisker-745fc7b96f-2lfkq\" (UID: \"184c1568-3e9d-42d7-9996-6febf66e9f97\") " pod="calico-system/whisker-745fc7b96f-2lfkq" Aug 13 00:23:09.558930 kubelet[2478]: I0813 00:23:09.558294 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4ls\" (UniqueName: \"kubernetes.io/projected/184c1568-3e9d-42d7-9996-6febf66e9f97-kube-api-access-mn4ls\") pod \"whisker-745fc7b96f-2lfkq\" (UID: \"184c1568-3e9d-42d7-9996-6febf66e9f97\") " pod="calico-system/whisker-745fc7b96f-2lfkq" Aug 13 00:23:09.558930 kubelet[2478]: I0813 00:23:09.558446 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/184c1568-3e9d-42d7-9996-6febf66e9f97-whisker-ca-bundle\") pod \"whisker-745fc7b96f-2lfkq\" (UID: \"184c1568-3e9d-42d7-9996-6febf66e9f97\") " pod="calico-system/whisker-745fc7b96f-2lfkq" Aug 13 00:23:09.561460 containerd[1443]: time="2025-08-13T00:23:09.561400174Z" level=info msg="shim disconnected" id=44b83d4dd59681ad457979f9f5c1a2b1936da465c254e5534d72961059db7c5e namespace=k8s.io Aug 13 00:23:09.561460 containerd[1443]: time="2025-08-13T00:23:09.561462174Z" level=warning msg="cleaning up after shim disconnected" id=44b83d4dd59681ad457979f9f5c1a2b1936da465c254e5534d72961059db7c5e namespace=k8s.io Aug 13 00:23:09.561814 containerd[1443]: time="2025-08-13T00:23:09.561471214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:23:09.659304 kubelet[2478]: I0813 00:23:09.658842 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/087e83b1-0f73-4043-8aea-dc61a1b40e0e-tigera-ca-bundle\") pod \"calico-kube-controllers-7c647fbdd7-hsbbn\" (UID: \"087e83b1-0f73-4043-8aea-dc61a1b40e0e\") " pod="calico-system/calico-kube-controllers-7c647fbdd7-hsbbn" Aug 13 00:23:09.659304 kubelet[2478]: I0813 00:23:09.658887 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjnr\" (UniqueName: \"kubernetes.io/projected/087e83b1-0f73-4043-8aea-dc61a1b40e0e-kube-api-access-ljjnr\") pod \"calico-kube-controllers-7c647fbdd7-hsbbn\" (UID: \"087e83b1-0f73-4043-8aea-dc61a1b40e0e\") " pod="calico-system/calico-kube-controllers-7c647fbdd7-hsbbn" Aug 13 00:23:09.659304 kubelet[2478]: I0813 00:23:09.658908 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chxp4\" (UniqueName: \"kubernetes.io/projected/1d1ad45a-7073-4dfd-8cbf-ad24b938295e-kube-api-access-chxp4\") pod \"calico-apiserver-7d4b446f75-tvpqt\" (UID: \"1d1ad45a-7073-4dfd-8cbf-ad24b938295e\") " pod="calico-apiserver/calico-apiserver-7d4b446f75-tvpqt" Aug 13 00:23:09.659304 kubelet[2478]: I0813 00:23:09.658980 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1d1ad45a-7073-4dfd-8cbf-ad24b938295e-calico-apiserver-certs\") pod \"calico-apiserver-7d4b446f75-tvpqt\" (UID: \"1d1ad45a-7073-4dfd-8cbf-ad24b938295e\") " pod="calico-apiserver/calico-apiserver-7d4b446f75-tvpqt" Aug 13 00:23:09.771106 containerd[1443]: time="2025-08-13T00:23:09.769962317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745fc7b96f-2lfkq,Uid:184c1568-3e9d-42d7-9996-6febf66e9f97,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:09.771106 containerd[1443]: time="2025-08-13T00:23:09.770257759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9pbbd,Uid:5e9c7520-456a-4fb4-9e17-2a8c3cda47aa,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:09.775749 containerd[1443]: time="2025-08-13T00:23:09.775697755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b446f75-xbvj6,Uid:c7a20528-2e49-4684-bead-e1d4d74e5a78,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:23:09.783815 kubelet[2478]: E0813 00:23:09.783771 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:09.785550 containerd[1443]: time="2025-08-13T00:23:09.784471613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hc28g,Uid:e841be3f-a724-4526-a7fd-880807a1af6d,Namespace:kube-system,Attempt:0,}" Aug 13 00:23:09.793531 kubelet[2478]: E0813 00:23:09.793474 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:09.794571 containerd[1443]: time="2025-08-13T00:23:09.794421759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wjkgc,Uid:df5c37b4-17cf-442e-8788-172a4eba1e3f,Namespace:kube-system,Attempt:0,}" Aug 13 00:23:09.799757 containerd[1443]: time="2025-08-13T00:23:09.799387472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b446f75-tvpqt,Uid:1d1ad45a-7073-4dfd-8cbf-ad24b938295e,Namespace:calico-apiserver,Attempt:0,}" Aug 13 
00:23:09.803776 containerd[1443]: time="2025-08-13T00:23:09.803727380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c647fbdd7-hsbbn,Uid:087e83b1-0f73-4043-8aea-dc61a1b40e0e,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:10.207974 systemd[1]: Created slice kubepods-besteffort-pod4fd3051e_ecbd_4cf8_b840_da7c4d8d1f77.slice - libcontainer container kubepods-besteffort-pod4fd3051e_ecbd_4cf8_b840_da7c4d8d1f77.slice. Aug 13 00:23:10.221751 containerd[1443]: time="2025-08-13T00:23:10.221712833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xdctl,Uid:4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:10.391348 containerd[1443]: time="2025-08-13T00:23:10.391309168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:23:10.522495 containerd[1443]: time="2025-08-13T00:23:10.522189132Z" level=error msg="Failed to destroy network for sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.525744 containerd[1443]: time="2025-08-13T00:23:10.525693075Z" level=error msg="encountered an error cleaning up failed sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.525888 containerd[1443]: time="2025-08-13T00:23:10.525766755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xdctl,Uid:4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.526037 kubelet[2478]: E0813 00:23:10.525994 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.527388 containerd[1443]: time="2025-08-13T00:23:10.527342806Z" level=error msg="Failed to destroy network for sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.528025 containerd[1443]: time="2025-08-13T00:23:10.527994370Z" level=error msg="encountered an error cleaning up failed sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.528192 containerd[1443]: 
time="2025-08-13T00:23:10.528166571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hc28g,Uid:e841be3f-a724-4526-a7fd-880807a1af6d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.528778 kubelet[2478]: E0813 00:23:10.528731 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.528861 kubelet[2478]: E0813 00:23:10.528803 2478 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hc28g" Aug 13 00:23:10.528861 kubelet[2478]: E0813 00:23:10.528845 2478 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hc28g" Aug 13 00:23:10.529023 kubelet[2478]: E0813 00:23:10.528990 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hc28g_kube-system(e841be3f-a724-4526-a7fd-880807a1af6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hc28g_kube-system(e841be3f-a724-4526-a7fd-880807a1af6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hc28g" podUID="e841be3f-a724-4526-a7fd-880807a1af6d" Aug 13 00:23:10.530230 kubelet[2478]: E0813 00:23:10.530181 2478 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xdctl" Aug 13 00:23:10.530310 kubelet[2478]: E0813 00:23:10.530236 2478 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xdctl" Aug 13 00:23:10.530310 kubelet[2478]: E0813 00:23:10.530281 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xdctl_calico-system(4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xdctl_calico-system(4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xdctl" podUID="4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77" Aug 13 00:23:10.531268 containerd[1443]: time="2025-08-13T00:23:10.531222631Z" level=error msg="Failed to destroy network for sandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.531425 containerd[1443]: time="2025-08-13T00:23:10.531265791Z" level=error msg="Failed to destroy network for sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.531807 containerd[1443]: time="2025-08-13T00:23:10.531765154Z" level=error msg="encountered an error cleaning up failed sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.531873 containerd[1443]: time="2025-08-13T00:23:10.531820155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b446f75-tvpqt,Uid:1d1ad45a-7073-4dfd-8cbf-ad24b938295e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.532042 containerd[1443]: time="2025-08-13T00:23:10.531993156Z" level=error msg="encountered an error cleaning up failed sandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.532101 kubelet[2478]: E0813 00:23:10.532029 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.532142 kubelet[2478]: E0813 00:23:10.532096 2478 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4b446f75-tvpqt" Aug 13 00:23:10.532142 kubelet[2478]: E0813 00:23:10.532116 2478 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4b446f75-tvpqt" Aug 13 00:23:10.532203 kubelet[2478]: E0813 00:23:10.532159 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4b446f75-tvpqt_calico-apiserver(1d1ad45a-7073-4dfd-8cbf-ad24b938295e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4b446f75-tvpqt_calico-apiserver(1d1ad45a-7073-4dfd-8cbf-ad24b938295e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4b446f75-tvpqt" podUID="1d1ad45a-7073-4dfd-8cbf-ad24b938295e" Aug 13 00:23:10.532328 containerd[1443]: time="2025-08-13T00:23:10.532298878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b446f75-xbvj6,Uid:c7a20528-2e49-4684-bead-e1d4d74e5a78,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.532478 containerd[1443]: time="2025-08-13T00:23:10.532110476Z" level=error msg="Failed to destroy network for sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.532800 kubelet[2478]: E0813 00:23:10.532766 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.532857 kubelet[2478]: E0813 00:23:10.532810 2478 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4b446f75-xbvj6" Aug 13 00:23:10.532857 kubelet[2478]: E0813 00:23:10.532830 2478 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4b446f75-xbvj6" Aug 13 00:23:10.532916 kubelet[2478]: E0813 00:23:10.532858 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4b446f75-xbvj6_calico-apiserver(c7a20528-2e49-4684-bead-e1d4d74e5a78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4b446f75-xbvj6_calico-apiserver(c7a20528-2e49-4684-bead-e1d4d74e5a78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4b446f75-xbvj6" podUID="c7a20528-2e49-4684-bead-e1d4d74e5a78" Aug 13 00:23:10.533127 containerd[1443]: time="2025-08-13T00:23:10.533090843Z" level=error msg="encountered an error cleaning up failed sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.533250 containerd[1443]: time="2025-08-13T00:23:10.533227204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wjkgc,Uid:df5c37b4-17cf-442e-8788-172a4eba1e3f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.533514 kubelet[2478]: E0813 00:23:10.533483 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.533583 kubelet[2478]: E0813 00:23:10.533522 2478 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-wjkgc" Aug 13 00:23:10.533583 kubelet[2478]: E0813 00:23:10.533537 2478 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wjkgc" Aug 13 00:23:10.533583 kubelet[2478]: E0813 00:23:10.533564 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wjkgc_kube-system(df5c37b4-17cf-442e-8788-172a4eba1e3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wjkgc_kube-system(df5c37b4-17cf-442e-8788-172a4eba1e3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wjkgc" podUID="df5c37b4-17cf-442e-8788-172a4eba1e3f" Aug 13 00:23:10.534675 containerd[1443]: time="2025-08-13T00:23:10.534635693Z" level=error msg="Failed to destroy network for sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.534978 containerd[1443]: time="2025-08-13T00:23:10.534947775Z" level=error msg="encountered an error cleaning up failed sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.535039 containerd[1443]: time="2025-08-13T00:23:10.535015215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c647fbdd7-hsbbn,Uid:087e83b1-0f73-4043-8aea-dc61a1b40e0e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.535268 kubelet[2478]: E0813 00:23:10.535237 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.535326 kubelet[2478]: E0813 00:23:10.535280 2478 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c647fbdd7-hsbbn" Aug 13 00:23:10.535326 kubelet[2478]: E0813 00:23:10.535301 2478 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c647fbdd7-hsbbn" Aug 13 00:23:10.535390 kubelet[2478]: E0813 00:23:10.535338 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c647fbdd7-hsbbn_calico-system(087e83b1-0f73-4043-8aea-dc61a1b40e0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c647fbdd7-hsbbn_calico-system(087e83b1-0f73-4043-8aea-dc61a1b40e0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c647fbdd7-hsbbn" podUID="087e83b1-0f73-4043-8aea-dc61a1b40e0e" Aug 13 00:23:10.544843 containerd[1443]: time="2025-08-13T00:23:10.544788758Z" level=error msg="Failed to destroy network for sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.544997 containerd[1443]: time="2025-08-13T00:23:10.544850679Z" level=error msg="Failed to destroy network for sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.545273 containerd[1443]: time="2025-08-13T00:23:10.545245121Z" level=error msg="encountered an error cleaning up failed sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.545330 containerd[1443]: time="2025-08-13T00:23:10.545297842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9pbbd,Uid:5e9c7520-456a-4fb4-9e17-2a8c3cda47aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.545549 kubelet[2478]: E0813 00:23:10.545509 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.545599 kubelet[2478]: E0813 00:23:10.545573 2478 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9pbbd" Aug 13 00:23:10.545629 kubelet[2478]: E0813 00:23:10.545595 2478 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9pbbd" Aug 13 00:23:10.545652 kubelet[2478]: E0813 00:23:10.545631 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-9pbbd_calico-system(5e9c7520-456a-4fb4-9e17-2a8c3cda47aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-9pbbd_calico-system(5e9c7520-456a-4fb4-9e17-2a8c3cda47aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-9pbbd" podUID="5e9c7520-456a-4fb4-9e17-2a8c3cda47aa" Aug 13 00:23:10.545864 containerd[1443]: time="2025-08-13T00:23:10.545759164Z" level=error msg="encountered an error cleaning up failed sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.545997 containerd[1443]: time="2025-08-13T00:23:10.545898405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745fc7b96f-2lfkq,Uid:184c1568-3e9d-42d7-9996-6febf66e9f97,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.547153 kubelet[2478]: E0813 00:23:10.547124 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.547208 kubelet[2478]: E0813 00:23:10.547169 2478 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-745fc7b96f-2lfkq" Aug 13 00:23:10.547208 kubelet[2478]: E0813 00:23:10.547191 2478 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-745fc7b96f-2lfkq" Aug 13 00:23:10.547262 kubelet[2478]: E0813 00:23:10.547226 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-745fc7b96f-2lfkq_calico-system(184c1568-3e9d-42d7-9996-6febf66e9f97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-745fc7b96f-2lfkq_calico-system(184c1568-3e9d-42d7-9996-6febf66e9f97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-745fc7b96f-2lfkq" podUID="184c1568-3e9d-42d7-9996-6febf66e9f97" Aug 13 00:23:11.394132 kubelet[2478]: I0813 00:23:11.392287 2478 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Aug 13 00:23:11.397718 containerd[1443]: time="2025-08-13T00:23:11.397653558Z" level=info msg="StopPodSandbox for \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\"" Aug 13 00:23:11.398050 kubelet[2478]: I0813 00:23:11.397830 2478 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:11.398577 containerd[1443]: time="2025-08-13T00:23:11.398290682Z" level=info msg="Ensure that sandbox b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7 in task-service has been cleanup successfully" Aug 13 00:23:11.398645 containerd[1443]: time="2025-08-13T00:23:11.398603044Z" level=info msg="StopPodSandbox for \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\"" Aug 13 00:23:11.398762 containerd[1443]: time="2025-08-13T00:23:11.398742244Z" level=info msg="Ensure that sandbox 31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856 in task-service has been cleanup successfully" Aug 13 00:23:11.407384 kubelet[2478]: I0813 00:23:11.406886 2478 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:11.412641 containerd[1443]: time="2025-08-13T00:23:11.412590092Z" level=info msg="StopPodSandbox for \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\"" Aug 13 00:23:11.413014 containerd[1443]: time="2025-08-13T00:23:11.412991774Z" level=info msg="Ensure that sandbox 1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143 in task-service has been cleanup successfully" Aug 13 00:23:11.415608 kubelet[2478]: I0813 00:23:11.415580 2478 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Aug 13 00:23:11.416641 containerd[1443]: time="2025-08-13T00:23:11.416604597Z" level=info msg="StopPodSandbox for \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\"" Aug 13 00:23:11.417041 containerd[1443]: time="2025-08-13T00:23:11.417012039Z" level=info msg="Ensure that sandbox d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7 in task-service has been cleanup successfully" Aug 13 00:23:11.421745 kubelet[2478]: I0813 00:23:11.421712 2478 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:11.422826 containerd[1443]: time="2025-08-13T00:23:11.422759356Z" level=info msg="StopPodSandbox for \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\"" Aug 13 00:23:11.422993 containerd[1443]: time="2025-08-13T00:23:11.422955957Z" level=info msg="Ensure that sandbox a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a in task-service has been cleanup successfully" Aug 13 00:23:11.426488 kubelet[2478]: I0813 00:23:11.426107 2478 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Aug 13 00:23:11.426775 containerd[1443]: time="2025-08-13T00:23:11.426743021Z" level=info msg="StopPodSandbox for \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\"" Aug 13 00:23:11.426997 containerd[1443]: time="2025-08-13T00:23:11.426976542Z" level=info msg="Ensure that sandbox db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27 in task-service has been cleanup successfully" Aug 13 00:23:11.430837 kubelet[2478]: I0813 00:23:11.430464 2478 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Aug 13 00:23:11.431173 containerd[1443]: time="2025-08-13T00:23:11.431133728Z" level=info msg="StopPodSandbox for \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\"" Aug 13 00:23:11.431340 containerd[1443]: time="2025-08-13T00:23:11.431316689Z" level=info msg="Ensure that sandbox f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380 in task-service has been cleanup successfully" Aug 13 00:23:11.434533 kubelet[2478]: I0813 00:23:11.434506 2478 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Aug 13 00:23:11.435540 containerd[1443]: time="2025-08-13T00:23:11.435432355Z" level=info msg="StopPodSandbox for \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\"" Aug 13 00:23:11.436353 containerd[1443]: time="2025-08-13T00:23:11.436322721Z" level=info msg="Ensure that sandbox b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c in task-service has been cleanup successfully" Aug 13 00:23:11.484388 containerd[1443]: time="2025-08-13T00:23:11.484312983Z" level=error msg="StopPodSandbox for \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\" failed" error="failed to destroy network for sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
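Every error in the burst above has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes only after it starts, and that file does not exist yet. The add path (RunPodSandbox) and the delete path (StopPodSandbox) both perform this check, which is why kubelet's cleanup retries fail exactly like the original sandbox creations. A minimal Go sketch of the check the error text implies (the path and hint string come straight from the log; the function shape is an assumption, not Calico's actual source):

```go
package main

import (
	"fmt"
	"os"
)

// nodenameFile is written by calico/node on startup; the CNI plugin reads it
// on every add and delete, so both paths fail until calico-node is running.
const nodenameFile = "/var/lib/calico/nodename"

func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// Reproduces the hint seen throughout the log above.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}
```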
Aug 13 00:23:11.488287 kubelet[2478]: E0813 00:23:11.488131 2478 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:11.488953 containerd[1443]: time="2025-08-13T00:23:11.488889771Z" level=error msg="StopPodSandbox for \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\" failed" error="failed to destroy network for sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:11.489861 kubelet[2478]: E0813 00:23:11.489652 2478 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Aug 13 00:23:11.491283 kubelet[2478]: E0813 00:23:11.491210 2478 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7"} Aug 13 00:23:11.491383 kubelet[2478]: E0813 00:23:11.491321 2478 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e841be3f-a724-4526-a7fd-880807a1af6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:11.491383 kubelet[2478]: E0813 00:23:11.491351 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e841be3f-a724-4526-a7fd-880807a1af6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hc28g" podUID="e841be3f-a724-4526-a7fd-880807a1af6d" Aug 13 00:23:11.492155 kubelet[2478]: E0813 00:23:11.491944 2478 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856"} Aug 13 00:23:11.492155 kubelet[2478]: E0813 00:23:11.492008 2478 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e9c7520-456a-4fb4-9e17-2a8c3cda47aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:11.492155 kubelet[2478]: E0813 00:23:11.492034 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e9c7520-456a-4fb4-9e17-2a8c3cda47aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-9pbbd" podUID="5e9c7520-456a-4fb4-9e17-2a8c3cda47aa" Aug 13 00:23:11.506941 containerd[1443]: time="2025-08-13T00:23:11.505433756Z" level=error msg="StopPodSandbox for \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\" failed" error="failed to destroy network for sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:11.507447 kubelet[2478]: E0813 00:23:11.507288 2478 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:11.507447 kubelet[2478]: E0813 00:23:11.507350 2478 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a"} Aug 13 00:23:11.507447 kubelet[2478]: E0813 00:23:11.507397 2478 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"087e83b1-0f73-4043-8aea-dc61a1b40e0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:11.507447 kubelet[2478]: E0813 00:23:11.507420 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"087e83b1-0f73-4043-8aea-dc61a1b40e0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c647fbdd7-hsbbn" podUID="087e83b1-0f73-4043-8aea-dc61a1b40e0e" Aug 13 00:23:11.512302 containerd[1443]: time="2025-08-13T00:23:11.512243758Z" level=error msg="StopPodSandbox for \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\" 
failed" error="failed to destroy network for sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:11.512758 containerd[1443]: time="2025-08-13T00:23:11.512423720Z" level=error msg="StopPodSandbox for \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\" failed" error="failed to destroy network for sandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:11.512815 kubelet[2478]: E0813 00:23:11.512515 2478 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Aug 13 00:23:11.512815 kubelet[2478]: E0813 00:23:11.512565 2478 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380"} Aug 13 00:23:11.512815 kubelet[2478]: E0813 00:23:11.512576 2478 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Aug 13 00:23:11.512815 kubelet[2478]: E0813 00:23:11.512599 2478 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df5c37b4-17cf-442e-8788-172a4eba1e3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:11.512815 kubelet[2478]: E0813 00:23:11.512622 2478 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7"} Aug 13 00:23:11.512981 kubelet[2478]: E0813 00:23:11.512652 2478 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7a20528-2e49-4684-bead-e1d4d74e5a78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:11.512981 kubelet[2478]: E0813 00:23:11.512677 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"c7a20528-2e49-4684-bead-e1d4d74e5a78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4b446f75-xbvj6" podUID="c7a20528-2e49-4684-bead-e1d4d74e5a78" Aug 13 00:23:11.512981 kubelet[2478]: E0813 00:23:11.512623 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df5c37b4-17cf-442e-8788-172a4eba1e3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wjkgc" podUID="df5c37b4-17cf-442e-8788-172a4eba1e3f" Aug 13 00:23:11.518469 containerd[1443]: time="2025-08-13T00:23:11.518419597Z" level=error msg="StopPodSandbox for \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\" failed" error="failed to destroy network for sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:11.519201 kubelet[2478]: E0813 00:23:11.519039 2478 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Aug 13 00:23:11.519201 kubelet[2478]: E0813 00:23:11.519102 2478 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c"} Aug 13 00:23:11.519201 kubelet[2478]: E0813 00:23:11.519144 2478 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:11.519201 kubelet[2478]: E0813 00:23:11.519166 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-xdctl" podUID="4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77" Aug 13 00:23:11.523268 containerd[1443]: time="2025-08-13T00:23:11.523215187Z" level=error msg="StopPodSandbox for \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\" failed" error="failed to destroy network for sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:11.524508 kubelet[2478]: E0813 00:23:11.524460 2478 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Aug 13 00:23:11.524585 kubelet[2478]: E0813 00:23:11.524515 2478 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27"} Aug 13 00:23:11.524585 kubelet[2478]: E0813 00:23:11.524549 2478 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d1ad45a-7073-4dfd-8cbf-ad24b938295e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:11.524585 kubelet[2478]: E0813 00:23:11.524570 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d1ad45a-7073-4dfd-8cbf-ad24b938295e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4b446f75-tvpqt" podUID="1d1ad45a-7073-4dfd-8cbf-ad24b938295e" Aug 13 00:23:11.531059 containerd[1443]: time="2025-08-13T00:23:11.531015516Z" level=error msg="StopPodSandbox for \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\" failed" error="failed to destroy network for sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:11.531442 kubelet[2478]: E0813 00:23:11.531388 2478 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:11.531758 kubelet[2478]: E0813 00:23:11.531454 2478 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143"} Aug 13 00:23:11.531758 kubelet[2478]: E0813 00:23:11.531490 2478 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"184c1568-3e9d-42d7-9996-6febf66e9f97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:11.531758 kubelet[2478]: E0813 00:23:11.531511 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"184c1568-3e9d-42d7-9996-6febf66e9f97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-745fc7b96f-2lfkq" podUID="184c1568-3e9d-42d7-9996-6febf66e9f97" Aug 13 00:23:13.697213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882173990.mount: Deactivated successfully. Aug 13 00:23:13.861286 containerd[1443]: time="2025-08-13T00:23:13.861215321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:13.862642 containerd[1443]: time="2025-08-13T00:23:13.862598690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 13 00:23:13.863806 containerd[1443]: time="2025-08-13T00:23:13.863757217Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:13.865848 containerd[1443]: time="2025-08-13T00:23:13.865785469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:13.866475 containerd[1443]: time="2025-08-13T00:23:13.866423513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.474369341s" Aug 13 00:23:13.866475 containerd[1443]: time="2025-08-13T00:23:13.866454433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 13 00:23:13.876992 containerd[1443]: time="2025-08-13T00:23:13.875882329Z" level=info msg="CreateContainer within sandbox \"8128d7400e17d5a953296d6a8b659382f22deec102cbf21db417eb511d1632ba\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 
00:23:13.899349 containerd[1443]: time="2025-08-13T00:23:13.899299270Z" level=info msg="CreateContainer within sandbox \"8128d7400e17d5a953296d6a8b659382f22deec102cbf21db417eb511d1632ba\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f0f2bb672bd3e7668b3704378274a5040fa91a627561b675118cbb572677e722\"" Aug 13 00:23:13.900026 containerd[1443]: time="2025-08-13T00:23:13.899989754Z" level=info msg="StartContainer for \"f0f2bb672bd3e7668b3704378274a5040fa91a627561b675118cbb572677e722\"" Aug 13 00:23:13.948270 systemd[1]: Started cri-containerd-f0f2bb672bd3e7668b3704378274a5040fa91a627561b675118cbb572677e722.scope - libcontainer container f0f2bb672bd3e7668b3704378274a5040fa91a627561b675118cbb572677e722. Aug 13 00:23:13.975499 containerd[1443]: time="2025-08-13T00:23:13.975434606Z" level=info msg="StartContainer for \"f0f2bb672bd3e7668b3704378274a5040fa91a627561b675118cbb572677e722\" returns successfully" Aug 13 00:23:14.188443 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:23:14.188557 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 13 00:23:14.331980 containerd[1443]: time="2025-08-13T00:23:14.331928977Z" level=info msg="StopPodSandbox for \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\"" Aug 13 00:23:14.469310 kubelet[2478]: I0813 00:23:14.469236 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dw7dr" podStartSLOduration=1.765807774 podStartE2EDuration="13.469216901s" podCreationTimestamp="2025-08-13 00:23:01 +0000 UTC" firstStartedPulling="2025-08-13 00:23:02.16379043 +0000 UTC m=+32.068886985" lastFinishedPulling="2025-08-13 00:23:13.867199557 +0000 UTC m=+43.772296112" observedRunningTime="2025-08-13 00:23:14.468677937 +0000 UTC m=+44.373774492" watchObservedRunningTime="2025-08-13 00:23:14.469216901 +0000 UTC m=+44.374313456" Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.452 [INFO][3783] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.453 [INFO][3783] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" iface="eth0" netns="/var/run/netns/cni-3ea49355-8f12-6267-ce6e-3bb0caf5cd07" Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.453 [INFO][3783] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" iface="eth0" netns="/var/run/netns/cni-3ea49355-8f12-6267-ce6e-3bb0caf5cd07" Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.455 [INFO][3783] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" iface="eth0" netns="/var/run/netns/cni-3ea49355-8f12-6267-ce6e-3bb0caf5cd07" Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.455 [INFO][3783] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.456 [INFO][3783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.624 [INFO][3792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" HandleID="k8s-pod-network.1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Workload="localhost-k8s-whisker--745fc7b96f--2lfkq-eth0" Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.624 [INFO][3792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.624 [INFO][3792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.642 [WARNING][3792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" HandleID="k8s-pod-network.1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Workload="localhost-k8s-whisker--745fc7b96f--2lfkq-eth0" Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.643 [INFO][3792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" HandleID="k8s-pod-network.1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Workload="localhost-k8s-whisker--745fc7b96f--2lfkq-eth0" Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.645 [INFO][3792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:14.649425 containerd[1443]: 2025-08-13 00:23:14.647 [INFO][3783] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:14.649886 containerd[1443]: time="2025-08-13T00:23:14.649520276Z" level=info msg="TearDown network for sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\" successfully" Aug 13 00:23:14.649886 containerd[1443]: time="2025-08-13T00:23:14.649549316Z" level=info msg="StopPodSandbox for \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\" returns successfully" Aug 13 00:23:14.696561 systemd[1]: run-netns-cni\x2d3ea49355\x2d8f12\x2d6267\x2dce6e\x2d3bb0caf5cd07.mount: Deactivated successfully. 
Aug 13 00:23:14.806112 kubelet[2478]: I0813 00:23:14.806009 2478 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/184c1568-3e9d-42d7-9996-6febf66e9f97-whisker-ca-bundle\") pod \"184c1568-3e9d-42d7-9996-6febf66e9f97\" (UID: \"184c1568-3e9d-42d7-9996-6febf66e9f97\") " Aug 13 00:23:14.806112 kubelet[2478]: I0813 00:23:14.806063 2478 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/184c1568-3e9d-42d7-9996-6febf66e9f97-whisker-backend-key-pair\") pod \"184c1568-3e9d-42d7-9996-6febf66e9f97\" (UID: \"184c1568-3e9d-42d7-9996-6febf66e9f97\") " Aug 13 00:23:14.806112 kubelet[2478]: I0813 00:23:14.806122 2478 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn4ls\" (UniqueName: \"kubernetes.io/projected/184c1568-3e9d-42d7-9996-6febf66e9f97-kube-api-access-mn4ls\") pod \"184c1568-3e9d-42d7-9996-6febf66e9f97\" (UID: \"184c1568-3e9d-42d7-9996-6febf66e9f97\") " Aug 13 00:23:14.806528 kubelet[2478]: I0813 00:23:14.806471 2478 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184c1568-3e9d-42d7-9996-6febf66e9f97-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "184c1568-3e9d-42d7-9996-6febf66e9f97" (UID: "184c1568-3e9d-42d7-9996-6febf66e9f97"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:23:14.819102 systemd[1]: var-lib-kubelet-pods-184c1568\x2d3e9d\x2d42d7\x2d9996\x2d6febf66e9f97-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:23:14.819572 kubelet[2478]: I0813 00:23:14.819400 2478 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184c1568-3e9d-42d7-9996-6febf66e9f97-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "184c1568-3e9d-42d7-9996-6febf66e9f97" (UID: "184c1568-3e9d-42d7-9996-6febf66e9f97"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:23:14.819572 kubelet[2478]: I0813 00:23:14.819509 2478 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184c1568-3e9d-42d7-9996-6febf66e9f97-kube-api-access-mn4ls" (OuterVolumeSpecName: "kube-api-access-mn4ls") pod "184c1568-3e9d-42d7-9996-6febf66e9f97" (UID: "184c1568-3e9d-42d7-9996-6febf66e9f97"). InnerVolumeSpecName "kube-api-access-mn4ls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:23:14.822460 systemd[1]: var-lib-kubelet-pods-184c1568\x2d3e9d\x2d42d7\x2d9996\x2d6febf66e9f97-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmn4ls.mount: Deactivated successfully. 
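The "\x2d" and "\x7e" runs in these systemd mount-unit names are systemd's unit-name escaping of the volume paths being unmounted, e.g. /var/lib/kubelet/pods/184c1568-3e9d-42d7-9996-6febf66e9f97/volumes/kubernetes.io~projected/kube-api-access-mn4ls. A simplified Go sketch of the rule (an assumption condensed from systemd's escaping scheme; it omits corner cases such as a leading dot): "/" becomes "-", and any byte outside [a-zA-Z0-9:_.] is hex-escaped as \xNN, so the "-" in the pod UID becomes \x2d and the "~" in "kubernetes.io~projected" becomes \x7e.

```go
package main

import (
	"fmt"
	"strings"
)

// systemdEscapePath reproduces the unit names seen above (simplified; real
// systemd also special-cases details like a leading dot).
func systemdEscapePath(path string) string {
	path = strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c) // allowed characters pass through
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else is hex-escaped
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/184c1568-3e9d-42d7-9996-6febf66e9f97/volumes/kubernetes.io~projected/kube-api-access-mn4ls"
	// Prints the same unit name systemd deactivates in the log above.
	fmt.Println(systemdEscapePath(p) + ".mount")
}
```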
Aug 13 00:23:14.906795 kubelet[2478]: I0813 00:23:14.906677 2478 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/184c1568-3e9d-42d7-9996-6febf66e9f97-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 00:23:14.906795 kubelet[2478]: I0813 00:23:14.906713 2478 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mn4ls\" (UniqueName: \"kubernetes.io/projected/184c1568-3e9d-42d7-9996-6febf66e9f97-kube-api-access-mn4ls\") on node \"localhost\" DevicePath \"\"" Aug 13 00:23:14.906795 kubelet[2478]: I0813 00:23:14.906723 2478 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/184c1568-3e9d-42d7-9996-6febf66e9f97-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 00:23:15.458163 systemd[1]: Removed slice kubepods-besteffort-pod184c1568_3e9d_42d7_9996_6febf66e9f97.slice - libcontainer container kubepods-besteffort-pod184c1568_3e9d_42d7_9996_6febf66e9f97.slice. Aug 13 00:23:15.533656 systemd[1]: Created slice kubepods-besteffort-pod3507aceb_01d2_4c89_894e_2459d29fa345.slice - libcontainer container kubepods-besteffort-pod3507aceb_01d2_4c89_894e_2459d29fa345.slice. Aug 13 00:23:15.611508 kubelet[2478]: I0813 00:23:15.611450 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3507aceb-01d2-4c89-894e-2459d29fa345-whisker-ca-bundle\") pod \"whisker-59799b8d8c-x6dtj\" (UID: \"3507aceb-01d2-4c89-894e-2459d29fa345\") " pod="calico-system/whisker-59799b8d8c-x6dtj" Aug 13 00:23:15.611508 kubelet[2478]: I0813 00:23:15.611519 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3507aceb-01d2-4c89-894e-2459d29fa345-whisker-backend-key-pair\") pod \"whisker-59799b8d8c-x6dtj\" (UID: \"3507aceb-01d2-4c89-894e-2459d29fa345\") " pod="calico-system/whisker-59799b8d8c-x6dtj" Aug 13 00:23:15.611876 kubelet[2478]: I0813 00:23:15.611538 2478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28w7j\" (UniqueName: \"kubernetes.io/projected/3507aceb-01d2-4c89-894e-2459d29fa345-kube-api-access-28w7j\") pod \"whisker-59799b8d8c-x6dtj\" (UID: \"3507aceb-01d2-4c89-894e-2459d29fa345\") " pod="calico-system/whisker-59799b8d8c-x6dtj" Aug 13 00:23:15.825284 kernel: bpftool[3990]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 00:23:15.839093 containerd[1443]: time="2025-08-13T00:23:15.839031735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59799b8d8c-x6dtj,Uid:3507aceb-01d2-4c89-894e-2459d29fa345,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:16.017918 systemd-networkd[1383]: vxlan.calico: Link UP Aug 13 00:23:16.017934 systemd-networkd[1383]: vxlan.calico: Gained carrier Aug 13 00:23:16.117944 systemd-networkd[1383]: cali26a614fd173: Link UP Aug 13 00:23:16.118222 systemd-networkd[1383]: cali26a614fd173: Gained carrier Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:15.988 [INFO][3993] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--59799b8d8c--x6dtj-eth0 whisker-59799b8d8c- calico-system 3507aceb-01d2-4c89-894e-2459d29fa345 906 0 2025-08-13 00:23:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker 
pod-template-hash:59799b8d8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-59799b8d8c-x6dtj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali26a614fd173 [] [] }} ContainerID="c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" Namespace="calico-system" Pod="whisker-59799b8d8c-x6dtj" WorkloadEndpoint="localhost-k8s-whisker--59799b8d8c--x6dtj-" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:15.989 [INFO][3993] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" Namespace="calico-system" Pod="whisker-59799b8d8c-x6dtj" WorkloadEndpoint="localhost-k8s-whisker--59799b8d8c--x6dtj-eth0" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.046 [INFO][4020] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" HandleID="k8s-pod-network.c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" Workload="localhost-k8s-whisker--59799b8d8c--x6dtj-eth0" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.048 [INFO][4020] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" HandleID="k8s-pod-network.c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" Workload="localhost-k8s-whisker--59799b8d8c--x6dtj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dcfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-59799b8d8c-x6dtj", "timestamp":"2025-08-13 00:23:16.046409877 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.048 [INFO][4020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.048 [INFO][4020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.048 [INFO][4020] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.074 [INFO][4020] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" host="localhost" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.086 [INFO][4020] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.091 [INFO][4020] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.094 [INFO][4020] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.096 [INFO][4020] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.096 [INFO][4020] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" host="localhost" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.099 [INFO][4020] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.103 [INFO][4020] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" host="localhost" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.111 [INFO][4020] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" host="localhost" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.111 [INFO][4020] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" host="localhost" Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.111 [INFO][4020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:23:16.138058 containerd[1443]: 2025-08-13 00:23:16.111 [INFO][4020] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" HandleID="k8s-pod-network.c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" Workload="localhost-k8s-whisker--59799b8d8c--x6dtj-eth0" Aug 13 00:23:16.138670 containerd[1443]: 2025-08-13 00:23:16.114 [INFO][3993] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" Namespace="calico-system" Pod="whisker-59799b8d8c-x6dtj" WorkloadEndpoint="localhost-k8s-whisker--59799b8d8c--x6dtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59799b8d8c--x6dtj-eth0", GenerateName:"whisker-59799b8d8c-", Namespace:"calico-system", SelfLink:"", UID:"3507aceb-01d2-4c89-894e-2459d29fa345", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59799b8d8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-59799b8d8c-x6dtj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26a614fd173", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:16.138670 containerd[1443]: 2025-08-13 00:23:16.114 [INFO][3993] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" Namespace="calico-system" Pod="whisker-59799b8d8c-x6dtj" WorkloadEndpoint="localhost-k8s-whisker--59799b8d8c--x6dtj-eth0" Aug 13 00:23:16.138670 containerd[1443]: 2025-08-13 00:23:16.114 [INFO][3993] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26a614fd173 ContainerID="c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" Namespace="calico-system" Pod="whisker-59799b8d8c-x6dtj" WorkloadEndpoint="localhost-k8s-whisker--59799b8d8c--x6dtj-eth0" Aug 13 00:23:16.138670 containerd[1443]: 2025-08-13 00:23:16.118 [INFO][3993] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" Namespace="calico-system" Pod="whisker-59799b8d8c-x6dtj" WorkloadEndpoint="localhost-k8s-whisker--59799b8d8c--x6dtj-eth0" Aug 13 00:23:16.138670 containerd[1443]: 2025-08-13 00:23:16.120 [INFO][3993] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" Namespace="calico-system" Pod="whisker-59799b8d8c-x6dtj" WorkloadEndpoint="localhost-k8s-whisker--59799b8d8c--x6dtj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59799b8d8c--x6dtj-eth0", GenerateName:"whisker-59799b8d8c-", Namespace:"calico-system", SelfLink:"", UID:"3507aceb-01d2-4c89-894e-2459d29fa345", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59799b8d8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b", Pod:"whisker-59799b8d8c-x6dtj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26a614fd173", MAC:"fa:7d:cc:e9:d0:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:16.138670 containerd[1443]: 2025-08-13 00:23:16.135 [INFO][3993] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b" Namespace="calico-system" Pod="whisker-59799b8d8c-x6dtj" WorkloadEndpoint="localhost-k8s-whisker--59799b8d8c--x6dtj-eth0" Aug 13 00:23:16.182665 kubelet[2478]: I0813 00:23:16.182472 2478 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="184c1568-3e9d-42d7-9996-6febf66e9f97" path="/var/lib/kubelet/pods/184c1568-3e9d-42d7-9996-6febf66e9f97/volumes" Aug 13 00:23:16.187660 containerd[1443]: time="2025-08-13T00:23:16.187398948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:16.187660 containerd[1443]: time="2025-08-13T00:23:16.187481709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:16.187660 containerd[1443]: time="2025-08-13T00:23:16.187494509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:16.187848 containerd[1443]: time="2025-08-13T00:23:16.187596749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:16.219300 systemd[1]: Started cri-containerd-c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b.scope - libcontainer container c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b. 
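Editor's note: the Calico IPAM records above follow a fixed sequence: acquire the host-wide lock, look up the host's block affinities, try the affine block 192.168.88.128/26, claim the next free address, write the block back, release the lock. Below is a minimal, illustrative Go sketch of that flow; it is not Calico's implementation, and every name and type in it is invented for clarity.

package main

import (
	"fmt"
	"net"
	"sync"
)

// block is a hypothetical reduction of one /26 IPAM block: a CIDR plus a
// map from address ordinal to the handle ID that claimed it.
type block struct {
	cidr *net.IPNet
	used map[int]string
}

var hostWideLock sync.Mutex // "About to acquire host-wide IPAM lock."

func autoAssign(b *block, handleID string) (net.IP, error) {
	hostWideLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostWideLock.Unlock() // "Released host-wide IPAM lock."

	ones, bits := b.cidr.Mask.Size()
	for ord := 0; ord < 1<<(bits-ones); ord++ {
		if _, taken := b.used[ord]; taken {
			continue
		}
		// "Attempting to assign 1 addresses from block" then
		// "Writing block in order to claim IPs": record the claim.
		b.used[ord] = handleID
		ip := make(net.IP, len(b.cidr.IP))
		copy(ip, b.cidr.IP)
		ip[len(ip)-1] += byte(ord) // safe only because a /26 fits in the last octet
		return ip, nil
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr, used: map[int]string{0: "reserved"}}
	ip, _ := autoAssign(b, "k8s-pod-network.c4a9...") // handle ID shortened
	fmt.Println(ip)                                   // 192.168.88.129, matching the claim logged above
}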
Aug 13 00:23:16.237585 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:23:16.268212 containerd[1443]: time="2025-08-13T00:23:16.268164201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59799b8d8c-x6dtj,Uid:3507aceb-01d2-4c89-894e-2459d29fa345,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b\"" Aug 13 00:23:16.270513 containerd[1443]: time="2025-08-13T00:23:16.270407854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:23:17.205686 containerd[1443]: time="2025-08-13T00:23:17.205021074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:17.205686 containerd[1443]: time="2025-08-13T00:23:17.205591797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:23:17.206433 containerd[1443]: time="2025-08-13T00:23:17.206396521Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:17.209506 containerd[1443]: time="2025-08-13T00:23:17.209457618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:17.210240 containerd[1443]: time="2025-08-13T00:23:17.210206822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 939.761328ms" Aug 13 00:23:17.210288 containerd[1443]: time="2025-08-13T00:23:17.210247342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:23:17.212430 containerd[1443]: time="2025-08-13T00:23:17.212348674Z" level=info msg="CreateContainer within sandbox \"c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:23:17.245407 containerd[1443]: time="2025-08-13T00:23:17.245261895Z" level=info msg="CreateContainer within sandbox \"c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c3ab86cc39ec006aea8830df8583fec75e6684264624a5579fb3a8f3e8606b42\"" Aug 13 00:23:17.246179 containerd[1443]: time="2025-08-13T00:23:17.245791098Z" level=info msg="StartContainer for \"c3ab86cc39ec006aea8830df8583fec75e6684264624a5579fb3a8f3e8606b42\"" Aug 13 00:23:17.291328 systemd[1]: Started cri-containerd-c3ab86cc39ec006aea8830df8583fec75e6684264624a5579fb3a8f3e8606b42.scope - libcontainer container c3ab86cc39ec006aea8830df8583fec75e6684264624a5579fb3a8f3e8606b42. 
Aug 13 00:23:17.425594 containerd[1443]: time="2025-08-13T00:23:17.425375565Z" level=info msg="StartContainer for \"c3ab86cc39ec006aea8830df8583fec75e6684264624a5579fb3a8f3e8606b42\" returns successfully" Aug 13 00:23:17.427307 containerd[1443]: time="2025-08-13T00:23:17.426780733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:23:17.947437 systemd-networkd[1383]: cali26a614fd173: Gained IPv6LL Aug 13 00:23:18.075318 systemd-networkd[1383]: vxlan.calico: Gained IPv6LL Aug 13 00:23:18.761358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108256329.mount: Deactivated successfully. Aug 13 00:23:18.783591 containerd[1443]: time="2025-08-13T00:23:18.783525109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:18.784208 containerd[1443]: time="2025-08-13T00:23:18.784175473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:23:18.787940 containerd[1443]: time="2025-08-13T00:23:18.787892533Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:18.790520 containerd[1443]: time="2025-08-13T00:23:18.790476347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:18.791711 containerd[1443]: time="2025-08-13T00:23:18.791306671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.364482658s" Aug 13 00:23:18.791711 containerd[1443]: time="2025-08-13T00:23:18.791354392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:23:18.793700 containerd[1443]: time="2025-08-13T00:23:18.793637524Z" level=info msg="CreateContainer within sandbox \"c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:23:18.807635 containerd[1443]: time="2025-08-13T00:23:18.807489399Z" level=info msg="CreateContainer within sandbox \"c4a912445f612075807f2ec9d68ec3c23b41b5815796d19c09bf53ab0a13653b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7343e2d4caccd27c07e2add2cd8e4bf69508f9207a7eb45c42b756015133b442\"" Aug 13 00:23:18.808305 containerd[1443]: time="2025-08-13T00:23:18.808221803Z" level=info msg="StartContainer for \"7343e2d4caccd27c07e2add2cd8e4bf69508f9207a7eb45c42b756015133b442\"" Aug 13 00:23:18.851300 systemd[1]: Started cri-containerd-7343e2d4caccd27c07e2add2cd8e4bf69508f9207a7eb45c42b756015133b442.scope - libcontainer container 7343e2d4caccd27c07e2add2cd8e4bf69508f9207a7eb45c42b756015133b442. 
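Editor's note: a few records below, the kubelet's pod_startup_latency_tracker reports podStartSLOduration=1.958341598 and podStartE2EDuration="4.480421661s" for this pod. The SLO figure is simply the end-to-end startup minus the image-pull window, which the record's own timestamps confirm; here is a small Go check of that arithmetic (this mirrors the subtraction, not kubelet's actual code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamp format carried in the kubelet record.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-08-13 00:23:15 +0000 UTC")           // podCreationTimestamp
	firstPull := parse("2025-08-13 00:23:16.270189053 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-08-13 00:23:18.792269116 +0000 UTC")  // lastFinishedPulling
	observed := parse("2025-08-13 00:23:19.480421661 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // end-to-end minus the image-pull window
	fmt.Println(e2e, slo)                // 4.480421661s 1.958341598s
}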
Aug 13 00:23:18.905881 containerd[1443]: time="2025-08-13T00:23:18.905824969Z" level=info msg="StartContainer for \"7343e2d4caccd27c07e2add2cd8e4bf69508f9207a7eb45c42b756015133b442\" returns successfully" Aug 13 00:23:19.482881 kubelet[2478]: I0813 00:23:19.480439 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59799b8d8c-x6dtj" podStartSLOduration=1.958341598 podStartE2EDuration="4.480421661s" podCreationTimestamp="2025-08-13 00:23:15 +0000 UTC" firstStartedPulling="2025-08-13 00:23:16.270189053 +0000 UTC m=+46.175285608" lastFinishedPulling="2025-08-13 00:23:18.792269116 +0000 UTC m=+48.697365671" observedRunningTime="2025-08-13 00:23:19.477753007 +0000 UTC m=+49.382849522" watchObservedRunningTime="2025-08-13 00:23:19.480421661 +0000 UTC m=+49.385518296" Aug 13 00:23:22.179363 containerd[1443]: time="2025-08-13T00:23:22.179030552Z" level=info msg="StopPodSandbox for \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\"" Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.230 [INFO][4250] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.231 [INFO][4250] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" iface="eth0" netns="/var/run/netns/cni-635a937c-7a18-c131-72f4-9a82b4fa383b" Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.231 [INFO][4250] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" iface="eth0" netns="/var/run/netns/cni-635a937c-7a18-c131-72f4-9a82b4fa383b" Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.231 [INFO][4250] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" iface="eth0" netns="/var/run/netns/cni-635a937c-7a18-c131-72f4-9a82b4fa383b" Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.231 [INFO][4250] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.231 [INFO][4250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.253 [INFO][4259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" HandleID="k8s-pod-network.31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.254 [INFO][4259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.254 [INFO][4259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.268 [WARNING][4259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" HandleID="k8s-pod-network.31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.269 [INFO][4259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" HandleID="k8s-pod-network.31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.271 [INFO][4259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:22.275168 containerd[1443]: 2025-08-13 00:23:22.273 [INFO][4250] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:22.275725 containerd[1443]: time="2025-08-13T00:23:22.275351958Z" level=info msg="TearDown network for sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\" successfully" Aug 13 00:23:22.275725 containerd[1443]: time="2025-08-13T00:23:22.275379278Z" level=info msg="StopPodSandbox for \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\" returns successfully" Aug 13 00:23:22.277937 systemd[1]: run-netns-cni\x2d635a937c\x2d7a18\x2dc131\x2d72f4\x2d9a82b4fa383b.mount: Deactivated successfully. Aug 13 00:23:22.278819 containerd[1443]: time="2025-08-13T00:23:22.278486733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9pbbd,Uid:5e9c7520-456a-4fb4-9e17-2a8c3cda47aa,Namespace:calico-system,Attempt:1,}" Aug 13 00:23:22.429740 systemd-networkd[1383]: cali14e8fad0ee4: Link UP Aug 13 00:23:22.429895 systemd-networkd[1383]: cali14e8fad0ee4: Gained carrier Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.347 [INFO][4269] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0 goldmane-768f4c5c69- calico-system 5e9c7520-456a-4fb4-9e17-2a8c3cda47aa 943 0 2025-08-13 00:23:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-9pbbd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali14e8fad0ee4 [] [] }} ContainerID="04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" Namespace="calico-system" Pod="goldmane-768f4c5c69-9pbbd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9pbbd-" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.347 [INFO][4269] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" Namespace="calico-system" Pod="goldmane-768f4c5c69-9pbbd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.385 [INFO][4283] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" HandleID="k8s-pod-network.04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.386 
[INFO][4283] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" HandleID="k8s-pod-network.04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d56a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-9pbbd", "timestamp":"2025-08-13 00:23:22.385884874 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.386 [INFO][4283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.386 [INFO][4283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.386 [INFO][4283] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.397 [INFO][4283] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" host="localhost" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.404 [INFO][4283] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.409 [INFO][4283] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.411 [INFO][4283] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.413 [INFO][4283] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.413 [INFO][4283] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" host="localhost" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.415 [INFO][4283] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2 Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.420 [INFO][4283] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" host="localhost" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.425 [INFO][4283] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" host="localhost" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.425 [INFO][4283] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" host="localhost" Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.425 [INFO][4283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:23:22.454113 containerd[1443]: 2025-08-13 00:23:22.425 [INFO][4283] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" HandleID="k8s-pod-network.04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:22.454666 containerd[1443]: 2025-08-13 00:23:22.427 [INFO][4269] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" Namespace="calico-system" Pod="goldmane-768f4c5c69-9pbbd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"5e9c7520-456a-4fb4-9e17-2a8c3cda47aa", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-9pbbd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14e8fad0ee4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:22.454666 containerd[1443]: 2025-08-13 00:23:22.428 [INFO][4269] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" Namespace="calico-system" Pod="goldmane-768f4c5c69-9pbbd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:22.454666 containerd[1443]: 2025-08-13 00:23:22.428 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14e8fad0ee4 ContainerID="04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" Namespace="calico-system" Pod="goldmane-768f4c5c69-9pbbd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:22.454666 containerd[1443]: 2025-08-13 00:23:22.429 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" Namespace="calico-system" Pod="goldmane-768f4c5c69-9pbbd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:22.454666 containerd[1443]: 2025-08-13 00:23:22.430 [INFO][4269] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" Namespace="calico-system" Pod="goldmane-768f4c5c69-9pbbd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"5e9c7520-456a-4fb4-9e17-2a8c3cda47aa", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2", Pod:"goldmane-768f4c5c69-9pbbd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14e8fad0ee4", MAC:"ee:a1:eb:f2:5e:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:22.454666 containerd[1443]: 2025-08-13 00:23:22.446 [INFO][4269] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2" Namespace="calico-system" Pod="goldmane-768f4c5c69-9pbbd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:22.486360 containerd[1443]: time="2025-08-13T00:23:22.485862658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:22.486360 containerd[1443]: time="2025-08-13T00:23:22.486317940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:22.486360 containerd[1443]: time="2025-08-13T00:23:22.486331340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:22.486555 containerd[1443]: time="2025-08-13T00:23:22.486431020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:22.508285 systemd[1]: Started cri-containerd-04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2.scope - libcontainer container 04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2. 
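Editor's note: every record in this excerpt shares the shape "MMM DD HH:MM:SS.micro unit[pid]: message" (kernel lines omit the [pid]). A small Go splitter for lines of exactly this shape, useful when pulling out the embedded containerd and kubelet fields; the regular expression is an assumption fitted to the lines shown here, not a general journald parser:

package main

import (
	"fmt"
	"regexp"
)

// Matches e.g. "Aug 13 00:23:22.539616 containerd[1443]: time=... msg=...".
var record = regexp.MustCompile(`^([A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}) (\S+?)\[(\d+)\]: (.*)$`)

func main() {
	line := `Aug 13 00:23:22.543357 containerd[1443]: time="2025-08-13T00:23:22.543316827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""`
	if m := record.FindStringSubmatch(line); m != nil {
		fmt.Printf("ts=%s unit=%s pid=%s\nmsg=%s\n", m[1], m[2], m[3], m[4])
	}
}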
Aug 13 00:23:22.522133 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:23:22.539616 containerd[1443]: time="2025-08-13T00:23:22.539574968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9pbbd,Uid:5e9c7520-456a-4fb4-9e17-2a8c3cda47aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2\"" Aug 13 00:23:22.543357 containerd[1443]: time="2025-08-13T00:23:22.543316827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:23:23.179708 containerd[1443]: time="2025-08-13T00:23:23.179642697Z" level=info msg="StopPodSandbox for \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\"" Aug 13 00:23:23.180134 containerd[1443]: time="2025-08-13T00:23:23.179765658Z" level=info msg="StopPodSandbox for \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\"" Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.262 [INFO][4367] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.262 [INFO][4367] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" iface="eth0" netns="/var/run/netns/cni-ff4149ce-3c6a-8516-e618-d1d7515faf5a" Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.262 [INFO][4367] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" iface="eth0" netns="/var/run/netns/cni-ff4149ce-3c6a-8516-e618-d1d7515faf5a" Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.263 [INFO][4367] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" iface="eth0" netns="/var/run/netns/cni-ff4149ce-3c6a-8516-e618-d1d7515faf5a" Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.263 [INFO][4367] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.263 [INFO][4367] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.370 [INFO][4383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" HandleID="k8s-pod-network.a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.370 [INFO][4383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.370 [INFO][4383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.383 [WARNING][4383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" HandleID="k8s-pod-network.a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.384 [INFO][4383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" HandleID="k8s-pod-network.a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.386 [INFO][4383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:23.391451 containerd[1443]: 2025-08-13 00:23:23.388 [INFO][4367] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:23.391451 containerd[1443]: time="2025-08-13T00:23:23.391326547Z" level=info msg="TearDown network for sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\" successfully" Aug 13 00:23:23.391451 containerd[1443]: time="2025-08-13T00:23:23.391369387Z" level=info msg="StopPodSandbox for \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\" returns successfully" Aug 13 00:23:23.394764 containerd[1443]: time="2025-08-13T00:23:23.394711204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c647fbdd7-hsbbn,Uid:087e83b1-0f73-4043-8aea-dc61a1b40e0e,Namespace:calico-system,Attempt:1,}" Aug 13 00:23:23.395201 systemd[1]: run-netns-cni\x2dff4149ce\x2d3c6a\x2d8516\x2de618\x2dd1d7515faf5a.mount: Deactivated successfully. Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.356 [INFO][4373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.357 [INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" iface="eth0" netns="/var/run/netns/cni-8c34a421-678c-4fe4-cbb3-b76a0a0458e8" Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.357 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" iface="eth0" netns="/var/run/netns/cni-8c34a421-678c-4fe4-cbb3-b76a0a0458e8" Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.358 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" iface="eth0" netns="/var/run/netns/cni-8c34a421-678c-4fe4-cbb3-b76a0a0458e8" Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.358 [INFO][4373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.358 [INFO][4373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.385 [INFO][4392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" HandleID="k8s-pod-network.d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0" Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.385 [INFO][4392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.386 [INFO][4392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.400 [WARNING][4392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" HandleID="k8s-pod-network.d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0" Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.400 [INFO][4392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" HandleID="k8s-pod-network.d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0" Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.401 [INFO][4392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:23.407180 containerd[1443]: 2025-08-13 00:23:23.403 [INFO][4373] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Aug 13 00:23:23.407583 containerd[1443]: time="2025-08-13T00:23:23.407305946Z" level=info msg="TearDown network for sandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\" successfully" Aug 13 00:23:23.407583 containerd[1443]: time="2025-08-13T00:23:23.407336507Z" level=info msg="StopPodSandbox for \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\" returns successfully" Aug 13 00:23:23.408272 containerd[1443]: time="2025-08-13T00:23:23.408237831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b446f75-xbvj6,Uid:c7a20528-2e49-4684-bead-e1d4d74e5a78,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:23:23.410424 systemd[1]: run-netns-cni\x2d8c34a421\x2d678c\x2d4fe4\x2dcbb3\x2db76a0a0458e8.mount: Deactivated successfully. 
Aug 13 00:23:23.532342 systemd-networkd[1383]: cali2f523c591b5: Link UP Aug 13 00:23:23.532753 systemd-networkd[1383]: cali2f523c591b5: Gained carrier Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.452 [INFO][4402] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0 calico-kube-controllers-7c647fbdd7- calico-system 087e83b1-0f73-4043-8aea-dc61a1b40e0e 951 0 2025-08-13 00:23:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c647fbdd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c647fbdd7-hsbbn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2f523c591b5 [] [] }} ContainerID="31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" Namespace="calico-system" Pod="calico-kube-controllers-7c647fbdd7-hsbbn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.452 [INFO][4402] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" Namespace="calico-system" Pod="calico-kube-controllers-7c647fbdd7-hsbbn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.488 [INFO][4429] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" HandleID="k8s-pod-network.31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.489 [INFO][4429] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" HandleID="k8s-pod-network.31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000343030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c647fbdd7-hsbbn", "timestamp":"2025-08-13 00:23:23.488793111 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.489 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.489 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.489 [INFO][4429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.501 [INFO][4429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" host="localhost" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.505 [INFO][4429] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.510 [INFO][4429] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.512 [INFO][4429] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.514 [INFO][4429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.515 [INFO][4429] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" host="localhost" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.516 [INFO][4429] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.520 [INFO][4429] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" host="localhost" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.527 [INFO][4429] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" host="localhost" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.527 [INFO][4429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" host="localhost" Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.527 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:23:23.553653 containerd[1443]: 2025-08-13 00:23:23.527 [INFO][4429] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" HandleID="k8s-pod-network.31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:23.554259 containerd[1443]: 2025-08-13 00:23:23.530 [INFO][4402] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" Namespace="calico-system" Pod="calico-kube-controllers-7c647fbdd7-hsbbn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0", GenerateName:"calico-kube-controllers-7c647fbdd7-", Namespace:"calico-system", SelfLink:"", UID:"087e83b1-0f73-4043-8aea-dc61a1b40e0e", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c647fbdd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c647fbdd7-hsbbn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f523c591b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:23.554259 containerd[1443]: 2025-08-13 00:23:23.530 [INFO][4402] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" Namespace="calico-system" Pod="calico-kube-controllers-7c647fbdd7-hsbbn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:23.554259 containerd[1443]: 2025-08-13 00:23:23.530 [INFO][4402] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f523c591b5 ContainerID="31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" Namespace="calico-system" Pod="calico-kube-controllers-7c647fbdd7-hsbbn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:23.554259 containerd[1443]: 2025-08-13 00:23:23.533 [INFO][4402] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" Namespace="calico-system" Pod="calico-kube-controllers-7c647fbdd7-hsbbn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:23.554259 containerd[1443]: 2025-08-13 00:23:23.534 [INFO][4402] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" Namespace="calico-system" Pod="calico-kube-controllers-7c647fbdd7-hsbbn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0", GenerateName:"calico-kube-controllers-7c647fbdd7-", Namespace:"calico-system", SelfLink:"", UID:"087e83b1-0f73-4043-8aea-dc61a1b40e0e", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c647fbdd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b", Pod:"calico-kube-controllers-7c647fbdd7-hsbbn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f523c591b5", MAC:"02:8c:60:2b:d7:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:23.554259 containerd[1443]: 2025-08-13 00:23:23.550 [INFO][4402] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b" Namespace="calico-system" Pod="calico-kube-controllers-7c647fbdd7-hsbbn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:23.571922 containerd[1443]: time="2025-08-13T00:23:23.571611081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:23.571922 containerd[1443]: time="2025-08-13T00:23:23.571684642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:23.571922 containerd[1443]: time="2025-08-13T00:23:23.571710202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:23.572369 containerd[1443]: time="2025-08-13T00:23:23.572308685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:23.595259 systemd[1]: Started cri-containerd-31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b.scope - libcontainer container 31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b. 
Aug 13 00:23:23.607791 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:23:23.635373 systemd-networkd[1383]: cali59e5b6b9c5f: Link UP Aug 13 00:23:23.636426 systemd-networkd[1383]: cali59e5b6b9c5f: Gained carrier Aug 13 00:23:23.649294 containerd[1443]: time="2025-08-13T00:23:23.649192466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c647fbdd7-hsbbn,Uid:087e83b1-0f73-4043-8aea-dc61a1b40e0e,Namespace:calico-system,Attempt:1,} returns sandbox id \"31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b\"" Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.470 [INFO][4414] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0 calico-apiserver-7d4b446f75- calico-apiserver c7a20528-2e49-4684-bead-e1d4d74e5a78 952 0 2025-08-13 00:22:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4b446f75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d4b446f75-xbvj6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali59e5b6b9c5f [] [] }} ContainerID="64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-xbvj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-" Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.470 [INFO][4414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-xbvj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0" Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.505 [INFO][4436] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" HandleID="k8s-pod-network.64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0" Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.505 [INFO][4436] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" HandleID="k8s-pod-network.64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d4b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d4b446f75-xbvj6", "timestamp":"2025-08-13 00:23:23.505179952 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.505 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.527 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.527 [INFO][4436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.604 [INFO][4436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" host="localhost"
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.609 [INFO][4436] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.613 [INFO][4436] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.615 [INFO][4436] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.618 [INFO][4436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.618 [INFO][4436] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" host="localhost"
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.620 [INFO][4436] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.623 [INFO][4436] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" host="localhost"
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.629 [INFO][4436] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" host="localhost"
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.629 [INFO][4436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" host="localhost"
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.629 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:23.653574 containerd[1443]: 2025-08-13 00:23:23.629 [INFO][4436] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" HandleID="k8s-pod-network.64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0"
Aug 13 00:23:23.654099 containerd[1443]: 2025-08-13 00:23:23.632 [INFO][4414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-xbvj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0", GenerateName:"calico-apiserver-7d4b446f75-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7a20528-2e49-4684-bead-e1d4d74e5a78", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b446f75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d4b446f75-xbvj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59e5b6b9c5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:23.654099 containerd[1443]: 2025-08-13 00:23:23.632 [INFO][4414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-xbvj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0"
Aug 13 00:23:23.654099 containerd[1443]: 2025-08-13 00:23:23.632 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59e5b6b9c5f ContainerID="64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-xbvj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0"
Aug 13 00:23:23.654099 containerd[1443]: 2025-08-13 00:23:23.636 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-xbvj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0"
Aug 13 00:23:23.654099 containerd[1443]: 2025-08-13 00:23:23.637 [INFO][4414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-xbvj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0", GenerateName:"calico-apiserver-7d4b446f75-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7a20528-2e49-4684-bead-e1d4d74e5a78", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b446f75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9", Pod:"calico-apiserver-7d4b446f75-xbvj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59e5b6b9c5f", MAC:"5a:9d:61:a9:c7:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:23.654099 containerd[1443]: 2025-08-13 00:23:23.649 [INFO][4414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-xbvj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0"
Aug 13 00:23:23.676683 containerd[1443]: time="2025-08-13T00:23:23.676473321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:23:23.676683 containerd[1443]: time="2025-08-13T00:23:23.676585002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:23:23.676683 containerd[1443]: time="2025-08-13T00:23:23.676615562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:23:23.677101 containerd[1443]: time="2025-08-13T00:23:23.677042724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:23:23.700291 systemd[1]: Started cri-containerd-64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9.scope - libcontainer container 64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9.
Aug 13 00:23:23.713707 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:23:23.735800 containerd[1443]: time="2025-08-13T00:23:23.735757935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b446f75-xbvj6,Uid:c7a20528-2e49-4684-bead-e1d4d74e5a78,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9\""
Aug 13 00:23:24.027697 systemd-networkd[1383]: cali14e8fad0ee4: Gained IPv6LL
Aug 13 00:23:24.180348 containerd[1443]: time="2025-08-13T00:23:24.179216442Z" level=info msg="StopPodSandbox for \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\""
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.285 [INFO][4560] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c"
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.285 [INFO][4560] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" iface="eth0" netns="/var/run/netns/cni-a9f0e99a-ea2f-0ab9-d476-5f4a6275b458"
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.285 [INFO][4560] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" iface="eth0" netns="/var/run/netns/cni-a9f0e99a-ea2f-0ab9-d476-5f4a6275b458"
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.285 [INFO][4560] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" iface="eth0" netns="/var/run/netns/cni-a9f0e99a-ea2f-0ab9-d476-5f4a6275b458"
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.285 [INFO][4560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c"
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.285 [INFO][4560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c"
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.314 [INFO][4573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" HandleID="k8s-pod-network.b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Workload="localhost-k8s-csi--node--driver--xdctl-eth0"
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.314 [INFO][4573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.315 [INFO][4573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.325 [WARNING][4573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" HandleID="k8s-pod-network.b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Workload="localhost-k8s-csi--node--driver--xdctl-eth0"
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.325 [INFO][4573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" HandleID="k8s-pod-network.b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Workload="localhost-k8s-csi--node--driver--xdctl-eth0"
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.326 [INFO][4573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:24.330811 containerd[1443]: 2025-08-13 00:23:24.328 [INFO][4560] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c"
Aug 13 00:23:24.331612 containerd[1443]: time="2025-08-13T00:23:24.331478626Z" level=info msg="TearDown network for sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\" successfully"
Aug 13 00:23:24.331612 containerd[1443]: time="2025-08-13T00:23:24.331515347Z" level=info msg="StopPodSandbox for \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\" returns successfully"
Aug 13 00:23:24.332584 containerd[1443]: time="2025-08-13T00:23:24.332553312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xdctl,Uid:4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77,Namespace:calico-system,Attempt:1,}"
Aug 13 00:23:24.397386 systemd[1]: run-netns-cni\x2da9f0e99a\x2dea2f\x2d0ab9\x2dd476\x2d5f4a6275b458.mount: Deactivated successfully.
Aug 13 00:23:24.470166 systemd-networkd[1383]: calia9b43eee59b: Link UP
Aug 13 00:23:24.472128 systemd-networkd[1383]: calia9b43eee59b: Gained carrier
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.387 [INFO][4582] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xdctl-eth0 csi-node-driver- calico-system 4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77 964 0 2025-08-13 00:23:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xdctl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia9b43eee59b [] [] }} ContainerID="8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" Namespace="calico-system" Pod="csi-node-driver-xdctl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xdctl-"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.387 [INFO][4582] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" Namespace="calico-system" Pod="csi-node-driver-xdctl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xdctl-eth0"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.419 [INFO][4596] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" HandleID="k8s-pod-network.8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" Workload="localhost-k8s-csi--node--driver--xdctl-eth0"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.419 [INFO][4596] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" HandleID="k8s-pod-network.8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" Workload="localhost-k8s-csi--node--driver--xdctl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xdctl", "timestamp":"2025-08-13 00:23:24.419517857 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.419 [INFO][4596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.419 [INFO][4596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.419 [INFO][4596] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.429 [INFO][4596] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" host="localhost"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.434 [INFO][4596] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.440 [INFO][4596] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.443 [INFO][4596] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.448 [INFO][4596] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.448 [INFO][4596] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" host="localhost"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.450 [INFO][4596] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.455 [INFO][4596] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" host="localhost"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.462 [INFO][4596] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" host="localhost"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.462 [INFO][4596] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" host="localhost"
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.462 [INFO][4596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:24.493810 containerd[1443]: 2025-08-13 00:23:24.462 [INFO][4596] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" HandleID="k8s-pod-network.8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" Workload="localhost-k8s-csi--node--driver--xdctl-eth0"
Aug 13 00:23:24.494405 containerd[1443]: 2025-08-13 00:23:24.465 [INFO][4582] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" Namespace="calico-system" Pod="csi-node-driver-xdctl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xdctl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xdctl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xdctl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9b43eee59b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:24.494405 containerd[1443]: 2025-08-13 00:23:24.466 [INFO][4582] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" Namespace="calico-system" Pod="csi-node-driver-xdctl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xdctl-eth0"
Aug 13 00:23:24.494405 containerd[1443]: 2025-08-13 00:23:24.466 [INFO][4582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9b43eee59b ContainerID="8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" Namespace="calico-system" Pod="csi-node-driver-xdctl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xdctl-eth0"
Aug 13 00:23:24.494405 containerd[1443]: 2025-08-13 00:23:24.473 [INFO][4582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" Namespace="calico-system" Pod="csi-node-driver-xdctl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xdctl-eth0"
Aug 13 00:23:24.494405 containerd[1443]: 2025-08-13 00:23:24.474 [INFO][4582] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" Namespace="calico-system" Pod="csi-node-driver-xdctl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xdctl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xdctl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f", Pod:"csi-node-driver-xdctl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9b43eee59b", MAC:"62:32:2f:e6:ea:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:24.494405 containerd[1443]: 2025-08-13 00:23:24.487 [INFO][4582] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f" Namespace="calico-system" Pod="csi-node-driver-xdctl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xdctl-eth0"
Aug 13 00:23:24.516680 containerd[1443]: time="2025-08-13T00:23:24.516578931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:23:24.516680 containerd[1443]: time="2025-08-13T00:23:24.516642092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:23:24.517030 containerd[1443]: time="2025-08-13T00:23:24.516661412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:23:24.517250 containerd[1443]: time="2025-08-13T00:23:24.517209014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:23:24.555368 systemd[1]: Started cri-containerd-8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f.scope - libcontainer container 8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f.
Aug 13 00:23:24.568664 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:23:24.602569 containerd[1443]: time="2025-08-13T00:23:24.602450791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xdctl,Uid:4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77,Namespace:calico-system,Attempt:1,} returns sandbox id \"8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f\""
Aug 13 00:23:24.730919 containerd[1443]: time="2025-08-13T00:23:24.730856099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:24.731474 containerd[1443]: time="2025-08-13T00:23:24.731434542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Aug 13 00:23:24.732309 containerd[1443]: time="2025-08-13T00:23:24.732268666Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:24.734610 containerd[1443]: time="2025-08-13T00:23:24.734563357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:24.735592 containerd[1443]: time="2025-08-13T00:23:24.735562522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.192198815s"
Aug 13 00:23:24.735659 containerd[1443]: time="2025-08-13T00:23:24.735600442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Aug 13 00:23:24.736713 containerd[1443]: time="2025-08-13T00:23:24.736539447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Aug 13 00:23:24.737879 containerd[1443]: time="2025-08-13T00:23:24.737835133Z" level=info msg="CreateContainer within sandbox \"04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Aug 13 00:23:24.753466 containerd[1443]: time="2025-08-13T00:23:24.753409409Z" level=info msg="CreateContainer within sandbox \"04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"722ec032caf981be0c72b2087caad3001044dc37c8a65b9088377e9935f1fac7\""
Aug 13 00:23:24.753958 containerd[1443]: time="2025-08-13T00:23:24.753924732Z" level=info msg="StartContainer for \"722ec032caf981be0c72b2087caad3001044dc37c8a65b9088377e9935f1fac7\""
Aug 13 00:23:24.781281 systemd[1]: Started cri-containerd-722ec032caf981be0c72b2087caad3001044dc37c8a65b9088377e9935f1fac7.scope - libcontainer container 722ec032caf981be0c72b2087caad3001044dc37c8a65b9088377e9935f1fac7.
Aug 13 00:23:24.796566 systemd-networkd[1383]: cali59e5b6b9c5f: Gained IPv6LL
Aug 13 00:23:24.813553 containerd[1443]: time="2025-08-13T00:23:24.813495223Z" level=info msg="StartContainer for \"722ec032caf981be0c72b2087caad3001044dc37c8a65b9088377e9935f1fac7\" returns successfully"
Aug 13 00:23:24.923207 systemd-networkd[1383]: cali2f523c591b5: Gained IPv6LL
Aug 13 00:23:25.178519 containerd[1443]: time="2025-08-13T00:23:25.178412395Z" level=info msg="StopPodSandbox for \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\""
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.223 [INFO][4708] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27"
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.224 [INFO][4708] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" iface="eth0" netns="/var/run/netns/cni-e211551e-2f46-8de0-84c0-e51bafb9a2e5"
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.224 [INFO][4708] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" iface="eth0" netns="/var/run/netns/cni-e211551e-2f46-8de0-84c0-e51bafb9a2e5"
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.225 [INFO][4708] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" iface="eth0" netns="/var/run/netns/cni-e211551e-2f46-8de0-84c0-e51bafb9a2e5"
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.225 [INFO][4708] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27"
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.225 [INFO][4708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27"
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.246 [INFO][4717] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" HandleID="k8s-pod-network.db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.247 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.247 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.255 [WARNING][4717] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" HandleID="k8s-pod-network.db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.255 [INFO][4717] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" HandleID="k8s-pod-network.db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.257 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:25.264592 containerd[1443]: 2025-08-13 00:23:25.262 [INFO][4708] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27"
Aug 13 00:23:25.264592 containerd[1443]: time="2025-08-13T00:23:25.264516650Z" level=info msg="TearDown network for sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\" successfully"
Aug 13 00:23:25.264592 containerd[1443]: time="2025-08-13T00:23:25.264545770Z" level=info msg="StopPodSandbox for \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\" returns successfully"
Aug 13 00:23:25.265540 containerd[1443]: time="2025-08-13T00:23:25.265198493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b446f75-tvpqt,Uid:1d1ad45a-7073-4dfd-8cbf-ad24b938295e,Namespace:calico-apiserver,Attempt:1,}"
Aug 13 00:23:25.372121 systemd-networkd[1383]: cali54ab7533e74: Link UP
Aug 13 00:23:25.372827 systemd-networkd[1383]: cali54ab7533e74: Gained carrier
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.307 [INFO][4725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0 calico-apiserver-7d4b446f75- calico-apiserver 1d1ad45a-7073-4dfd-8cbf-ad24b938295e 974 0 2025-08-13 00:22:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4b446f75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d4b446f75-tvpqt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali54ab7533e74 [] [] }} ContainerID="b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-tvpqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.307 [INFO][4725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-tvpqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.329 [INFO][4739] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" HandleID="k8s-pod-network.b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.329 [INFO][4739] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" HandleID="k8s-pod-network.b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d4b446f75-tvpqt", "timestamp":"2025-08-13 00:23:25.329741725 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.329 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.330 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.330 [INFO][4739] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.339 [INFO][4739] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" host="localhost"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.343 [INFO][4739] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.350 [INFO][4739] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.352 [INFO][4739] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.354 [INFO][4739] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.354 [INFO][4739] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" host="localhost"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.355 [INFO][4739] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.358 [INFO][4739] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" host="localhost"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.367 [INFO][4739] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" host="localhost"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.367 [INFO][4739] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" host="localhost"
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.367 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:25.389452 containerd[1443]: 2025-08-13 00:23:25.367 [INFO][4739] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" HandleID="k8s-pod-network.b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:25.390628 containerd[1443]: 2025-08-13 00:23:25.369 [INFO][4725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-tvpqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0", GenerateName:"calico-apiserver-7d4b446f75-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d1ad45a-7073-4dfd-8cbf-ad24b938295e", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b446f75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d4b446f75-tvpqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54ab7533e74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:25.390628 containerd[1443]: 2025-08-13 00:23:25.369 [INFO][4725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-tvpqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:25.390628 containerd[1443]: 2025-08-13 00:23:25.369 [INFO][4725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54ab7533e74 ContainerID="b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-tvpqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:25.390628 containerd[1443]: 2025-08-13 00:23:25.373 [INFO][4725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-tvpqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:25.390628 containerd[1443]: 2025-08-13 00:23:25.373 [INFO][4725] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-tvpqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0", GenerateName:"calico-apiserver-7d4b446f75-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d1ad45a-7073-4dfd-8cbf-ad24b938295e", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b446f75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3", Pod:"calico-apiserver-7d4b446f75-tvpqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54ab7533e74", MAC:"1e:c3:9c:60:bc:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:25.390628 containerd[1443]: 2025-08-13 00:23:25.386 [INFO][4725] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3" Namespace="calico-apiserver" Pod="calico-apiserver-7d4b446f75-tvpqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:25.397209 systemd[1]: run-netns-cni\x2de211551e\x2d2f46\x2d8de0\x2d84c0\x2de51bafb9a2e5.mount: Deactivated successfully.
Aug 13 00:23:25.428183 containerd[1443]: time="2025-08-13T00:23:25.425689987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:23:25.428183 containerd[1443]: time="2025-08-13T00:23:25.425762508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:23:25.428183 containerd[1443]: time="2025-08-13T00:23:25.425777508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:23:25.428183 containerd[1443]: time="2025-08-13T00:23:25.425869948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:23:25.463321 systemd[1]: Started cri-containerd-b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3.scope - libcontainer container b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3.
Aug 13 00:23:25.474244 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:23:25.492179 containerd[1443]: time="2025-08-13T00:23:25.492133348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4b446f75-tvpqt,Uid:1d1ad45a-7073-4dfd-8cbf-ad24b938295e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3\""
Aug 13 00:23:25.527825 kubelet[2478]: I0813 00:23:25.527526 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-9pbbd" podStartSLOduration=21.332223208 podStartE2EDuration="23.527507958s" podCreationTimestamp="2025-08-13 00:23:02 +0000 UTC" firstStartedPulling="2025-08-13 00:23:22.541132936 +0000 UTC m=+52.446229491" lastFinishedPulling="2025-08-13 00:23:24.736417686 +0000 UTC m=+54.641514241" observedRunningTime="2025-08-13 00:23:25.527282317 +0000 UTC m=+55.432378912" watchObservedRunningTime="2025-08-13 00:23:25.527507958 +0000 UTC m=+55.432604513"
Aug 13 00:23:26.067577 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:59588.service - OpenSSH per-connection server daemon (10.0.0.1:59588).
Aug 13 00:23:26.143253 sshd[4829]: Accepted publickey for core from 10.0.0.1 port 59588 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:26.147006 sshd[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:26.153160 systemd-logind[1425]: New session 8 of user core.
Aug 13 00:23:26.161287 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 00:23:26.181776 containerd[1443]: time="2025-08-13T00:23:26.181727261Z" level=info msg="StopPodSandbox for \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\""
Aug 13 00:23:26.182030 containerd[1443]: time="2025-08-13T00:23:26.181853382Z" level=info msg="StopPodSandbox for \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\""
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.268 [INFO][4855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7"
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.269 [INFO][4855] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" iface="eth0" netns="/var/run/netns/cni-e23fb799-d3e4-db86-a87c-03aba083974e"
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.269 [INFO][4855] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" iface="eth0" netns="/var/run/netns/cni-e23fb799-d3e4-db86-a87c-03aba083974e"
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.270 [INFO][4855] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" iface="eth0" netns="/var/run/netns/cni-e23fb799-d3e4-db86-a87c-03aba083974e"
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.270 [INFO][4855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7"
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.270 [INFO][4855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7"
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.308 [INFO][4883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" HandleID="k8s-pod-network.b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0"
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.308 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.308 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.323 [WARNING][4883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" HandleID="k8s-pod-network.b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0"
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.323 [INFO][4883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" HandleID="k8s-pod-network.b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0"
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.325 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:26.344408 containerd[1443]: 2025-08-13 00:23:26.332 [INFO][4855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7"
Aug 13 00:23:26.349373 containerd[1443]: time="2025-08-13T00:23:26.348805417Z" level=info msg="TearDown network for sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\" successfully"
Aug 13 00:23:26.349373 containerd[1443]: time="2025-08-13T00:23:26.348941937Z" level=info msg="StopPodSandbox for \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\" returns successfully"
Aug 13 00:23:26.349516 kubelet[2478]: E0813 00:23:26.349280 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:23:26.355516 containerd[1443]: time="2025-08-13T00:23:26.354714205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hc28g,Uid:e841be3f-a724-4526-a7fd-880807a1af6d,Namespace:kube-system,Attempt:1,}"
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.280 [INFO][4854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380"
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.280 [INFO][4854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" iface="eth0" netns="/var/run/netns/cni-832ba0cb-fc24-8e0c-45ad-24d02f275c44"
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.280 [INFO][4854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" iface="eth0" netns="/var/run/netns/cni-832ba0cb-fc24-8e0c-45ad-24d02f275c44"
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.280 [INFO][4854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" iface="eth0" netns="/var/run/netns/cni-832ba0cb-fc24-8e0c-45ad-24d02f275c44"
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.280 [INFO][4854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380"
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.281 [INFO][4854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380"
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.322 [INFO][4890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" HandleID="k8s-pod-network.f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0"
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.322 [INFO][4890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.326 [INFO][4890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.348 [WARNING][4890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" HandleID="k8s-pod-network.f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0"
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.348 [INFO][4890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" HandleID="k8s-pod-network.f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0"
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.350 [INFO][4890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:26.363834 containerd[1443]: 2025-08-13 00:23:26.361 [INFO][4854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380"
Aug 13 00:23:26.364502 containerd[1443]: time="2025-08-13T00:23:26.364065409Z" level=info msg="TearDown network for sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\" successfully"
Aug 13 00:23:26.364502 containerd[1443]: time="2025-08-13T00:23:26.364100969Z" level=info msg="StopPodSandbox for \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\" returns successfully"
Aug 13 00:23:26.366959 kubelet[2478]: E0813 00:23:26.364867 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:23:26.368674 containerd[1443]: time="2025-08-13T00:23:26.368377430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wjkgc,Uid:df5c37b4-17cf-442e-8788-172a4eba1e3f,Namespace:kube-system,Attempt:1,}"
Aug 13 00:23:26.395292 systemd[1]: run-containerd-runc-k8s.io-722ec032caf981be0c72b2087caad3001044dc37c8a65b9088377e9935f1fac7-runc.vH0Zxa.mount: Deactivated successfully.
Aug 13 00:23:26.395428 systemd[1]: run-netns-cni\x2d832ba0cb\x2dfc24\x2d8e0c\x2d45ad\x2d24d02f275c44.mount: Deactivated successfully.
Aug 13 00:23:26.395488 systemd[1]: run-netns-cni\x2de23fb799\x2dd3e4\x2ddb86\x2da87c\x2d03aba083974e.mount: Deactivated successfully.
Aug 13 00:23:26.459426 systemd-networkd[1383]: calia9b43eee59b: Gained IPv6LL
Aug 13 00:23:26.629193 sshd[4829]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:26.634504 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:59588.service: Deactivated successfully.
Aug 13 00:23:26.636331 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 00:23:26.641822 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit.
Aug 13 00:23:26.643066 systemd-networkd[1383]: calif0ca814bba1: Link UP
Aug 13 00:23:26.644711 systemd-logind[1425]: Removed session 8.
Aug 13 00:23:26.645342 systemd-networkd[1383]: calif0ca814bba1: Gained carrier Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.500 [INFO][4927] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0 coredns-668d6bf9bc- kube-system df5c37b4-17cf-442e-8788-172a4eba1e3f 1017 0 2025-08-13 00:22:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wjkgc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0ca814bba1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjkgc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjkgc-" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.500 [INFO][4927] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjkgc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.555 [INFO][4956] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" HandleID="k8s-pod-network.8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.555 [INFO][4956] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" HandleID="k8s-pod-network.8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wjkgc", "timestamp":"2025-08-13 00:23:26.555030958 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.555 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.556 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
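The ipam_plugin entries above show the CNI plugin asking Calico IPAM for exactly one IPv4 address (Num4:1, Num6:0) under a handle derived from the sandbox ID. A sketch mirroring the logged request fields, with simplified stand-in types rather than the real libcalico-go definitions:

```go
package main

import "fmt"

// AutoAssignArgs is a simplified stand-in for the request type visible in
// the ipam_plugin log line above, not the real libcalico-go definition.
type AutoAssignArgs struct {
	Num4     int               // IPv4 addresses requested (Num4:1 in the log)
	Num6     int               // IPv6 addresses requested (Num6:0 in the log)
	HandleID string            // later used to release the allocation
	Hostname string            // "localhost" here
	Attrs    map[string]string // namespace/node/pod attributes, as logged
}

func main() {
	args := AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: "k8s-pod-network.8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786",
		Hostname: "localhost",
		Attrs: map[string]string{
			"namespace": "kube-system",
			"node":      "localhost",
			"pod":       "coredns-668d6bf9bc-wjkgc",
		},
	}
	fmt.Printf("requesting %d IPv4 / %d IPv6 for handle %s\n",
		args.Num4, args.Num6, args.HandleID)
}
```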
Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.556 [INFO][4956] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.575 [INFO][4956] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" host="localhost" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.583 [INFO][4956] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.596 [INFO][4956] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.599 [INFO][4956] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.603 [INFO][4956] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.603 [INFO][4956] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" host="localhost" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.606 [INFO][4956] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786 Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.612 [INFO][4956] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" host="localhost" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.626 [INFO][4956] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" host="localhost" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.627 [INFO][4956] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" host="localhost" Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.627 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
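The allocation sequence logged above runs: acquire the host-wide IPAM lock, confirm the host's affinity for block 192.168.88.128/26, claim the next free address (.135), and write the block back. A toy sketch of that flow, under the assumption that earlier pods hold .128-.134; real Calico IPAM scans the block's allocation bitmap and persists it to the datastore rather than using an in-memory map:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	hostWideIPAMLock sync.Mutex
	used             = map[string]bool{} // allocations within 192.168.88.128/26
)

// autoAssign mimics the logged sequence: lock, scan the affine block,
// claim the first free address, and "write the block" (here, the map).
func autoAssign() string {
	hostWideIPAMLock.Lock() // "Acquired host-wide IPAM lock."
	defer hostWideIPAMLock.Unlock()

	for i := 128; i <= 191; i++ { // 192.168.88.128/26 spans .128-.191
		ip := fmt.Sprintf("192.168.88.%d", i)
		if !used[ip] {
			used[ip] = true // "Writing block in order to claim IPs"
			return ip + "/26"
		}
	}
	return "" // block exhausted; real IPAM would try another block
}

func main() {
	for i := 128; i < 135; i++ { // earlier pods already hold .128-.134
		used[fmt.Sprintf("192.168.88.%d", i)] = true
	}
	fmt.Println(autoAssign()) // 192.168.88.135/26, matching the log
}
```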
Aug 13 00:23:26.666650 containerd[1443]: 2025-08-13 00:23:26.627 [INFO][4956] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" HandleID="k8s-pod-network.8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:26.667962 containerd[1443]: 2025-08-13 00:23:26.637 [INFO][4927] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjkgc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"df5c37b4-17cf-442e-8788-172a4eba1e3f", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wjkgc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0ca814bba1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:26.667962 containerd[1443]: 2025-08-13 00:23:26.637 [INFO][4927] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjkgc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:26.667962 containerd[1443]: 2025-08-13 00:23:26.637 [INFO][4927] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0ca814bba1 ContainerID="8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjkgc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:26.667962 containerd[1443]: 2025-08-13 00:23:26.646 [INFO][4927] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjkgc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:26.667962 
containerd[1443]: 2025-08-13 00:23:26.648 [INFO][4927] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjkgc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"df5c37b4-17cf-442e-8788-172a4eba1e3f", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786", Pod:"coredns-668d6bf9bc-wjkgc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0ca814bba1", MAC:"9e:b2:88:59:98:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:26.667962 containerd[1443]: 2025-08-13 00:23:26.661 [INFO][4927] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786" Namespace="kube-system" Pod="coredns-668d6bf9bc-wjkgc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:26.719037 containerd[1443]: time="2025-08-13T00:23:26.718943658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:26.719037 containerd[1443]: time="2025-08-13T00:23:26.719009179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:26.719729 containerd[1443]: time="2025-08-13T00:23:26.719033899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:26.720123 containerd[1443]: time="2025-08-13T00:23:26.720050383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:26.750464 systemd[1]: Started cri-containerd-8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786.scope - libcontainer container 8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786. Aug 13 00:23:26.755563 systemd-networkd[1383]: calidd698746cef: Link UP Aug 13 00:23:26.757349 systemd-networkd[1383]: calidd698746cef: Gained carrier Aug 13 00:23:26.772000 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.516 [INFO][4932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--hc28g-eth0 coredns-668d6bf9bc- kube-system e841be3f-a724-4526-a7fd-880807a1af6d 1016 0 2025-08-13 00:22:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-hc28g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidd698746cef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" Namespace="kube-system" Pod="coredns-668d6bf9bc-hc28g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hc28g-" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.516 [INFO][4932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" Namespace="kube-system" Pod="coredns-668d6bf9bc-hc28g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.615 [INFO][4980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" HandleID="k8s-pod-network.7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.616 [INFO][4980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" HandleID="k8s-pod-network.7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000494e50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-hc28g", "timestamp":"2025-08-13 00:23:26.615285925 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.616 [INFO][4980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.627 [INFO][4980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
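The systemd-networkd "Link UP" / "Gained carrier" lines track the host-side Calico veths coming up. The same transitions can be observed from Go via rtnetlink with the vishvananda/netlink package (the library Calico's dataplane also builds on); a sketch, not part of any component logged here:

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
	"golang.org/x/sys/unix"
)

func main() {
	updates := make(chan netlink.LinkUpdate)
	done := make(chan struct{})
	defer close(done)

	// Subscribe to rtnetlink link updates, the same event stream that
	// systemd-networkd is reporting on above.
	if err := netlink.LinkSubscribe(updates, done); err != nil {
		log.Fatal(err)
	}
	for u := range updates {
		attrs := u.Link.Attrs()
		if u.IfInfomsg.Flags&unix.IFF_UP != 0 && attrs.OperState == netlink.OperUp {
			fmt.Printf("%s: link up, gained carrier\n", attrs.Name) // e.g. calif0ca814bba1
		}
	}
}
```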
Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.627 [INFO][4980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.671 [INFO][4980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" host="localhost" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.701 [INFO][4980] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.708 [INFO][4980] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.712 [INFO][4980] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.716 [INFO][4980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.716 [INFO][4980] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" host="localhost" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.722 [INFO][4980] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.730 [INFO][4980] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" host="localhost" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.745 [INFO][4980] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" host="localhost" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.745 [INFO][4980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" host="localhost" Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.745 [INFO][4980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
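The teardown entries at the start of this section (and again for sandboxes a42008… and 1ea5ed… further down) release addresses by handle ID first, log "Asked to release address but it doesn't exist. Ignoring" when nothing is recorded under that handle, and then fall back to "Releasing address using workloadID". A sketch of that fallback pattern with illustrative names, not Calico's real API:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

// allocations maps a handle or workload ID to its IP; illustrative only.
var allocations = map[string]string{
	"localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0": "192.168.88.130",
}

func releaseBy(id string) error {
	if _, ok := allocations[id]; !ok {
		return errNotFound
	}
	delete(allocations, id)
	return nil
}

// release tries the handle first; if nothing is recorded under it
// ("Asked to release address but it doesn't exist. Ignoring"), it falls
// back to releasing by workload ID.
func release(handleID, workloadID string) {
	if err := releaseBy(handleID); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: no allocation under handle, ignoring; trying workload ID")
		if err := releaseBy(workloadID); err == nil {
			fmt.Println("released by workload ID")
		}
	}
}

func main() {
	release("k8s-pod-network.f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380",
		"localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0")
}
```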
Aug 13 00:23:26.781036 containerd[1443]: 2025-08-13 00:23:26.745 [INFO][4980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" HandleID="k8s-pod-network.7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:26.781644 containerd[1443]: 2025-08-13 00:23:26.749 [INFO][4932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" Namespace="kube-system" Pod="coredns-668d6bf9bc-hc28g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hc28g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e841be3f-a724-4526-a7fd-880807a1af6d", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-hc28g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd698746cef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:26.781644 containerd[1443]: 2025-08-13 00:23:26.749 [INFO][4932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" Namespace="kube-system" Pod="coredns-668d6bf9bc-hc28g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:26.781644 containerd[1443]: 2025-08-13 00:23:26.749 [INFO][4932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd698746cef ContainerID="7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" Namespace="kube-system" Pod="coredns-668d6bf9bc-hc28g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:26.781644 containerd[1443]: 2025-08-13 00:23:26.758 [INFO][4932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" Namespace="kube-system" Pod="coredns-668d6bf9bc-hc28g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:26.781644 
containerd[1443]: 2025-08-13 00:23:26.759 [INFO][4932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" Namespace="kube-system" Pod="coredns-668d6bf9bc-hc28g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hc28g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e841be3f-a724-4526-a7fd-880807a1af6d", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be", Pod:"coredns-668d6bf9bc-hc28g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd698746cef", MAC:"92:5a:90:1b:47:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:26.781644 containerd[1443]: 2025-08-13 00:23:26.775 [INFO][4932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be" Namespace="kube-system" Pod="coredns-668d6bf9bc-hc28g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:26.808253 containerd[1443]: time="2025-08-13T00:23:26.808116083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wjkgc,Uid:df5c37b4-17cf-442e-8788-172a4eba1e3f,Namespace:kube-system,Attempt:1,} returns sandbox id \"8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786\"" Aug 13 00:23:26.808995 kubelet[2478]: E0813 00:23:26.808962 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:26.810158 containerd[1443]: time="2025-08-13T00:23:26.809568210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:26.810158 containerd[1443]: time="2025-08-13T00:23:26.809911451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:26.810158 containerd[1443]: time="2025-08-13T00:23:26.809940491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:26.813801 containerd[1443]: time="2025-08-13T00:23:26.813733829Z" level=info msg="CreateContainer within sandbox \"8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:23:26.814779 containerd[1443]: time="2025-08-13T00:23:26.811507099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:26.841314 systemd[1]: Started cri-containerd-7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be.scope - libcontainer container 7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be. Aug 13 00:23:26.854670 containerd[1443]: time="2025-08-13T00:23:26.854610184Z" level=info msg="CreateContainer within sandbox \"8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"abf2c73ccd140c9746f5f7b062f49afc21b8c77875a9bb4fb6824df247ea36af\"" Aug 13 00:23:26.855208 containerd[1443]: time="2025-08-13T00:23:26.855151186Z" level=info msg="StartContainer for \"abf2c73ccd140c9746f5f7b062f49afc21b8c77875a9bb4fb6824df247ea36af\"" Aug 13 00:23:26.862436 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:23:26.885100 containerd[1443]: time="2025-08-13T00:23:26.884914368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hc28g,Uid:e841be3f-a724-4526-a7fd-880807a1af6d,Namespace:kube-system,Attempt:1,} returns sandbox id \"7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be\"" Aug 13 00:23:26.888299 kubelet[2478]: E0813 00:23:26.888228 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:26.892729 containerd[1443]: time="2025-08-13T00:23:26.892677765Z" level=info msg="CreateContainer within sandbox \"7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:23:26.894516 systemd[1]: Started cri-containerd-abf2c73ccd140c9746f5f7b062f49afc21b8c77875a9bb4fb6824df247ea36af.scope - libcontainer container abf2c73ccd140c9746f5f7b062f49afc21b8c77875a9bb4fb6824df247ea36af. 
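Each CreateContainer/StartContainer pair in these entries is a CRI call pair that containerd resolves into task creation and start inside the pod sandbox. The equivalent sequence through containerd's public Go client looks roughly like this (a standalone sketch; kubelet drives the real flow over the CRI gRPC API, and the image reference is only an example):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Example image only; the coredns image in the log is pulled by the CRI layer.
	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Roughly what "CreateContainer within sandbox ... returns container id"
	// corresponds to: container metadata plus a writable snapshot and OCI spec.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// "StartContainer ... returns successfully" maps to task creation + start.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```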
Aug 13 00:23:26.906233 containerd[1443]: time="2025-08-13T00:23:26.905692307Z" level=info msg="CreateContainer within sandbox \"7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c230dfe2dbd6a4d1ac294371a4aba0dce3f1ffcc32209b64cfa7ae1bfd75195\"" Aug 13 00:23:26.908488 containerd[1443]: time="2025-08-13T00:23:26.908380320Z" level=info msg="StartContainer for \"5c230dfe2dbd6a4d1ac294371a4aba0dce3f1ffcc32209b64cfa7ae1bfd75195\"" Aug 13 00:23:26.939108 containerd[1443]: time="2025-08-13T00:23:26.939049186Z" level=info msg="StartContainer for \"abf2c73ccd140c9746f5f7b062f49afc21b8c77875a9bb4fb6824df247ea36af\" returns successfully" Aug 13 00:23:26.944348 systemd[1]: Started cri-containerd-5c230dfe2dbd6a4d1ac294371a4aba0dce3f1ffcc32209b64cfa7ae1bfd75195.scope - libcontainer container 5c230dfe2dbd6a4d1ac294371a4aba0dce3f1ffcc32209b64cfa7ae1bfd75195. Aug 13 00:23:26.982968 containerd[1443]: time="2025-08-13T00:23:26.982912035Z" level=info msg="StartContainer for \"5c230dfe2dbd6a4d1ac294371a4aba0dce3f1ffcc32209b64cfa7ae1bfd75195\" returns successfully" Aug 13 00:23:27.115402 containerd[1443]: time="2025-08-13T00:23:27.115339178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:27.116421 containerd[1443]: time="2025-08-13T00:23:27.116101582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 13 00:23:27.117797 containerd[1443]: time="2025-08-13T00:23:27.117755349Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:27.120915 containerd[1443]: time="2025-08-13T00:23:27.120505002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:27.121813 containerd[1443]: time="2025-08-13T00:23:27.121141205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.384567678s" Aug 13 00:23:27.121908 containerd[1443]: time="2025-08-13T00:23:27.121814088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:23:27.123287 containerd[1443]: time="2025-08-13T00:23:27.123251655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:23:27.132368 containerd[1443]: time="2025-08-13T00:23:27.132315338Z" level=info msg="CreateContainer within sandbox \"31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:23:27.157368 containerd[1443]: time="2025-08-13T00:23:27.157118054Z" level=info msg="CreateContainer within sandbox \"31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns 
container id \"4aebd1b28f4c0bf0c0dfccc82b7e4eab1e3505172aa1fe01abadb787e8199f3a\"" Aug 13 00:23:27.158686 containerd[1443]: time="2025-08-13T00:23:27.158632662Z" level=info msg="StartContainer for \"4aebd1b28f4c0bf0c0dfccc82b7e4eab1e3505172aa1fe01abadb787e8199f3a\"" Aug 13 00:23:27.192308 systemd[1]: Started cri-containerd-4aebd1b28f4c0bf0c0dfccc82b7e4eab1e3505172aa1fe01abadb787e8199f3a.scope - libcontainer container 4aebd1b28f4c0bf0c0dfccc82b7e4eab1e3505172aa1fe01abadb787e8199f3a. Aug 13 00:23:27.227249 systemd-networkd[1383]: cali54ab7533e74: Gained IPv6LL Aug 13 00:23:27.242197 containerd[1443]: time="2025-08-13T00:23:27.242149254Z" level=info msg="StartContainer for \"4aebd1b28f4c0bf0c0dfccc82b7e4eab1e3505172aa1fe01abadb787e8199f3a\" returns successfully" Aug 13 00:23:27.401380 systemd[1]: run-containerd-runc-k8s.io-8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786-runc.OWD7Mo.mount: Deactivated successfully. Aug 13 00:23:27.507163 kubelet[2478]: E0813 00:23:27.507123 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:27.516240 kubelet[2478]: E0813 00:23:27.516128 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:27.552254 kubelet[2478]: I0813 00:23:27.552182 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hc28g" podStartSLOduration=50.552158591 podStartE2EDuration="50.552158591s" podCreationTimestamp="2025-08-13 00:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:23:27.528827242 +0000 UTC m=+57.433923837" watchObservedRunningTime="2025-08-13 00:23:27.552158591 +0000 UTC m=+57.457255146" Aug 13 00:23:27.552506 kubelet[2478]: I0813 00:23:27.552355 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wjkgc" podStartSLOduration=50.552349352 podStartE2EDuration="50.552349352s" podCreationTimestamp="2025-08-13 00:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:23:27.550339863 +0000 UTC m=+57.455436418" watchObservedRunningTime="2025-08-13 00:23:27.552349352 +0000 UTC m=+57.457445907" Aug 13 00:23:27.582388 kubelet[2478]: I0813 00:23:27.582317 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c647fbdd7-hsbbn" podStartSLOduration=23.110813317 podStartE2EDuration="26.582296133s" podCreationTimestamp="2025-08-13 00:23:01 +0000 UTC" firstStartedPulling="2025-08-13 00:23:23.651288877 +0000 UTC m=+53.556385392" lastFinishedPulling="2025-08-13 00:23:27.122771653 +0000 UTC m=+57.027868208" observedRunningTime="2025-08-13 00:23:27.569070231 +0000 UTC m=+57.474166786" watchObservedRunningTime="2025-08-13 00:23:27.582296133 +0000 UTC m=+57.487392688" Aug 13 00:23:28.251283 systemd-networkd[1383]: calif0ca814bba1: Gained IPv6LL Aug 13 00:23:28.382676 systemd-networkd[1383]: calidd698746cef: Gained IPv6LL Aug 13 00:23:28.520705 kubelet[2478]: E0813 00:23:28.519942 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Aug 13 00:23:28.521335 kubelet[2478]: E0813 00:23:28.521188 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:29.161696 containerd[1443]: time="2025-08-13T00:23:29.161646965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:29.162297 containerd[1443]: time="2025-08-13T00:23:29.162260408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 13 00:23:29.162951 containerd[1443]: time="2025-08-13T00:23:29.162926091Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:29.165070 containerd[1443]: time="2025-08-13T00:23:29.165010061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:29.166634 containerd[1443]: time="2025-08-13T00:23:29.165986585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.04269497s" Aug 13 00:23:29.166634 containerd[1443]: time="2025-08-13T00:23:29.166026665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:23:29.168647 containerd[1443]: time="2025-08-13T00:23:29.168616997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:23:29.169786 containerd[1443]: time="2025-08-13T00:23:29.169719162Z" level=info msg="CreateContainer within sandbox \"64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:23:29.191371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1979668709.mount: Deactivated successfully. Aug 13 00:23:29.192893 containerd[1443]: time="2025-08-13T00:23:29.192842308Z" level=info msg="CreateContainer within sandbox \"64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7356b57604de66337472b4730de6a6472d97dad2d44c5fe99150d97a1a5026a1\"" Aug 13 00:23:29.193505 containerd[1443]: time="2025-08-13T00:23:29.193472591Z" level=info msg="StartContainer for \"7356b57604de66337472b4730de6a6472d97dad2d44c5fe99150d97a1a5026a1\"" Aug 13 00:23:29.225261 systemd[1]: Started cri-containerd-7356b57604de66337472b4730de6a6472d97dad2d44c5fe99150d97a1a5026a1.scope - libcontainer container 7356b57604de66337472b4730de6a6472d97dad2d44c5fe99150d97a1a5026a1. 
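As a quick sanity check on the pull timing logged above (45,886,406 bytes of ghcr.io/flatcar/calico/apiserver:v3.30.2 in roughly 2.04 s):

```go
package main

import "fmt"

func main() {
	const bytes = 45886406.0 // logged apiserver image size
	const seconds = 2.0427   // ≈ logged pull duration
	fmt.Printf("≈ %.1f MiB/s\n", bytes/seconds/(1024*1024)) // ≈ 21.4 MiB/s
}
```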
Aug 13 00:23:29.271780 containerd[1443]: time="2025-08-13T00:23:29.271000028Z" level=info msg="StartContainer for \"7356b57604de66337472b4730de6a6472d97dad2d44c5fe99150d97a1a5026a1\" returns successfully" Aug 13 00:23:29.528923 kubelet[2478]: E0813 00:23:29.528821 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:29.530298 kubelet[2478]: E0813 00:23:29.529798 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:29.539990 kubelet[2478]: I0813 00:23:29.539602 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4b446f75-xbvj6" podStartSLOduration=26.110775539 podStartE2EDuration="31.539586142s" podCreationTimestamp="2025-08-13 00:22:58 +0000 UTC" firstStartedPulling="2025-08-13 00:23:23.737962746 +0000 UTC m=+53.643059261" lastFinishedPulling="2025-08-13 00:23:29.166773269 +0000 UTC m=+59.071869864" observedRunningTime="2025-08-13 00:23:29.53929042 +0000 UTC m=+59.444386975" watchObservedRunningTime="2025-08-13 00:23:29.539586142 +0000 UTC m=+59.444682697" Aug 13 00:23:30.173283 containerd[1443]: time="2025-08-13T00:23:30.172768202Z" level=info msg="StopPodSandbox for \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\"" Aug 13 00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.233 [WARNING][5321] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0", GenerateName:"calico-kube-controllers-7c647fbdd7-", Namespace:"calico-system", SelfLink:"", UID:"087e83b1-0f73-4043-8aea-dc61a1b40e0e", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c647fbdd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b", Pod:"calico-kube-controllers-7c647fbdd7-hsbbn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f523c591b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.234 [INFO][5321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 
00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.234 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" iface="eth0" netns="" Aug 13 00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.234 [INFO][5321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.234 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.261 [INFO][5330] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" HandleID="k8s-pod-network.a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.262 [INFO][5330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.262 [INFO][5330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.270 [WARNING][5330] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" HandleID="k8s-pod-network.a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.270 [INFO][5330] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" HandleID="k8s-pod-network.a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.272 [INFO][5330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:30.275990 containerd[1443]: 2025-08-13 00:23:30.273 [INFO][5321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:30.276490 containerd[1443]: time="2025-08-13T00:23:30.276045872Z" level=info msg="TearDown network for sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\" successfully" Aug 13 00:23:30.276490 containerd[1443]: time="2025-08-13T00:23:30.276090632Z" level=info msg="StopPodSandbox for \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\" returns successfully" Aug 13 00:23:30.276945 containerd[1443]: time="2025-08-13T00:23:30.276918476Z" level=info msg="RemovePodSandbox for \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\"" Aug 13 00:23:30.281278 containerd[1443]: time="2025-08-13T00:23:30.281246415Z" level=info msg="Forcibly stopping sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\"" Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.322 [WARNING][5351] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0", GenerateName:"calico-kube-controllers-7c647fbdd7-", Namespace:"calico-system", SelfLink:"", UID:"087e83b1-0f73-4043-8aea-dc61a1b40e0e", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c647fbdd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31a669f788cd60bbe10c7752acb471e1c6fe3701dbc30c9eb248eb1dde532d7b", Pod:"calico-kube-controllers-7c647fbdd7-hsbbn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f523c591b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.322 [INFO][5351] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.322 [INFO][5351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" iface="eth0" netns="" Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.322 [INFO][5351] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.322 [INFO][5351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.346 [INFO][5360] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" HandleID="k8s-pod-network.a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.346 [INFO][5360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.346 [INFO][5360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.360 [WARNING][5360] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" HandleID="k8s-pod-network.a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.360 [INFO][5360] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" HandleID="k8s-pod-network.a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Workload="localhost-k8s-calico--kube--controllers--7c647fbdd7--hsbbn-eth0" Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.362 [INFO][5360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:30.369115 containerd[1443]: 2025-08-13 00:23:30.367 [INFO][5351] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a" Aug 13 00:23:30.369624 containerd[1443]: time="2025-08-13T00:23:30.369145495Z" level=info msg="TearDown network for sandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\" successfully" Aug 13 00:23:30.395648 containerd[1443]: time="2025-08-13T00:23:30.395595495Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:23:30.395799 containerd[1443]: time="2025-08-13T00:23:30.395687656Z" level=info msg="RemovePodSandbox \"a42008f664153841f8ce7698b1d0887e9fb32b6bf5cedfc8a3e5027dd777349a\" returns successfully" Aug 13 00:23:30.396640 containerd[1443]: time="2025-08-13T00:23:30.396280498Z" level=info msg="StopPodSandbox for \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\"" Aug 13 00:23:30.397211 containerd[1443]: time="2025-08-13T00:23:30.397181343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:30.398496 containerd[1443]: time="2025-08-13T00:23:30.398463148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 13 00:23:30.399401 containerd[1443]: time="2025-08-13T00:23:30.399357872Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:30.403235 containerd[1443]: time="2025-08-13T00:23:30.402730008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:30.404518 containerd[1443]: time="2025-08-13T00:23:30.404479536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.235681858s" Aug 13 00:23:30.404518 containerd[1443]: time="2025-08-13T00:23:30.404519216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 13 
00:23:30.406202 containerd[1443]: time="2025-08-13T00:23:30.406162943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:23:30.414428 containerd[1443]: time="2025-08-13T00:23:30.414063379Z" level=info msg="CreateContainer within sandbox \"8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.438 [WARNING][5378] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" WorkloadEndpoint="localhost-k8s-whisker--745fc7b96f--2lfkq-eth0" Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.438 [INFO][5378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.438 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" iface="eth0" netns="" Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.438 [INFO][5378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.438 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.462 [INFO][5390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" HandleID="k8s-pod-network.1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Workload="localhost-k8s-whisker--745fc7b96f--2lfkq-eth0" Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.462 [INFO][5390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.462 [INFO][5390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.471 [WARNING][5390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" HandleID="k8s-pod-network.1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Workload="localhost-k8s-whisker--745fc7b96f--2lfkq-eth0" Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.471 [INFO][5390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" HandleID="k8s-pod-network.1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Workload="localhost-k8s-whisker--745fc7b96f--2lfkq-eth0" Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.473 [INFO][5390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:30.476605 containerd[1443]: 2025-08-13 00:23:30.475 [INFO][5378] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:30.477172 containerd[1443]: time="2025-08-13T00:23:30.477028545Z" level=info msg="TearDown network for sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\" successfully" Aug 13 00:23:30.477172 containerd[1443]: time="2025-08-13T00:23:30.477060186Z" level=info msg="StopPodSandbox for \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\" returns successfully" Aug 13 00:23:30.477655 containerd[1443]: time="2025-08-13T00:23:30.477619268Z" level=info msg="RemovePodSandbox for \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\"" Aug 13 00:23:30.477690 containerd[1443]: time="2025-08-13T00:23:30.477658148Z" level=info msg="Forcibly stopping sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\"" Aug 13 00:23:30.488759 containerd[1443]: time="2025-08-13T00:23:30.488706359Z" level=info msg="CreateContainer within sandbox \"8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ee9ed503d568ce6fe5b0344da547efa3e95d6f254db7d0fcdd03745f42b2307a\"" Aug 13 00:23:30.489255 containerd[1443]: time="2025-08-13T00:23:30.489225201Z" level=info msg="StartContainer for \"ee9ed503d568ce6fe5b0344da547efa3e95d6f254db7d0fcdd03745f42b2307a\"" Aug 13 00:23:30.520924 systemd[1]: run-containerd-runc-k8s.io-ee9ed503d568ce6fe5b0344da547efa3e95d6f254db7d0fcdd03745f42b2307a-runc.IdiYoN.mount: Deactivated successfully. Aug 13 00:23:30.533682 systemd[1]: Started cri-containerd-ee9ed503d568ce6fe5b0344da547efa3e95d6f254db7d0fcdd03745f42b2307a.scope - libcontainer container ee9ed503d568ce6fe5b0344da547efa3e95d6f254db7d0fcdd03745f42b2307a. Aug 13 00:23:30.540280 kubelet[2478]: E0813 00:23:30.540244 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:23:30.577639 containerd[1443]: time="2025-08-13T00:23:30.577587163Z" level=info msg="StartContainer for \"ee9ed503d568ce6fe5b0344da547efa3e95d6f254db7d0fcdd03745f42b2307a\" returns successfully" Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.525 [WARNING][5407] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" WorkloadEndpoint="localhost-k8s-whisker--745fc7b96f--2lfkq-eth0" Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.525 [INFO][5407] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.525 [INFO][5407] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" iface="eth0" netns="" Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.525 [INFO][5407] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.525 [INFO][5407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.561 [INFO][5433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" HandleID="k8s-pod-network.1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Workload="localhost-k8s-whisker--745fc7b96f--2lfkq-eth0" Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.561 [INFO][5433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.561 [INFO][5433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.573 [WARNING][5433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" HandleID="k8s-pod-network.1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Workload="localhost-k8s-whisker--745fc7b96f--2lfkq-eth0" Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.573 [INFO][5433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" HandleID="k8s-pod-network.1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Workload="localhost-k8s-whisker--745fc7b96f--2lfkq-eth0" Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.574 [INFO][5433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:30.582996 containerd[1443]: 2025-08-13 00:23:30.577 [INFO][5407] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143" Aug 13 00:23:30.582996 containerd[1443]: time="2025-08-13T00:23:30.581373420Z" level=info msg="TearDown network for sandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\" successfully" Aug 13 00:23:30.585874 containerd[1443]: time="2025-08-13T00:23:30.585832160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:23:30.586062 containerd[1443]: time="2025-08-13T00:23:30.586042121Z" level=info msg="RemovePodSandbox \"1ea5ed136690a88ec1756077666b185f8e15035d1c82b7db657fd5054de22143\" returns successfully" Aug 13 00:23:30.586647 containerd[1443]: time="2025-08-13T00:23:30.586620564Z" level=info msg="StopPodSandbox for \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\"" Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.625 [WARNING][5466] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hc28g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e841be3f-a724-4526-a7fd-880807a1af6d", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be", Pod:"coredns-668d6bf9bc-hc28g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd698746cef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.625 [INFO][5466] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.625 [INFO][5466] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" iface="eth0" netns="" Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.625 [INFO][5466] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.625 [INFO][5466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.644 [INFO][5475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" HandleID="k8s-pod-network.b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.644 [INFO][5475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.644 [INFO][5475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.653 [WARNING][5475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" HandleID="k8s-pod-network.b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.653 [INFO][5475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" HandleID="k8s-pod-network.b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.654 [INFO][5475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:30.661212 containerd[1443]: 2025-08-13 00:23:30.659 [INFO][5466] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Aug 13 00:23:30.662016 containerd[1443]: time="2025-08-13T00:23:30.661266263Z" level=info msg="TearDown network for sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\" successfully" Aug 13 00:23:30.662016 containerd[1443]: time="2025-08-13T00:23:30.661308183Z" level=info msg="StopPodSandbox for \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\" returns successfully" Aug 13 00:23:30.662213 containerd[1443]: time="2025-08-13T00:23:30.662113387Z" level=info msg="RemovePodSandbox for \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\"" Aug 13 00:23:30.662213 containerd[1443]: time="2025-08-13T00:23:30.662142027Z" level=info msg="Forcibly stopping sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\"" Aug 13 00:23:30.713345 containerd[1443]: time="2025-08-13T00:23:30.713165859Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:30.714532 containerd[1443]: time="2025-08-13T00:23:30.714494145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:23:30.716197 containerd[1443]: time="2025-08-13T00:23:30.716031232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 309.827929ms" Aug 13 00:23:30.721104 containerd[1443]: time="2025-08-13T00:23:30.716068192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:23:30.721104 containerd[1443]: time="2025-08-13T00:23:30.720459612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:23:30.721936 containerd[1443]: time="2025-08-13T00:23:30.721899939Z" level=info msg="CreateContainer within sandbox \"b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:23:30.740485 containerd[1443]: time="2025-08-13T00:23:30.740366223Z" level=info msg="CreateContainer within 
sandbox \"b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e1f0a4302aaca1d15a7f4b1ba5e0febaf9f1e0a44513b243ea2fbf5a3311a20\"" Aug 13 00:23:30.741002 containerd[1443]: time="2025-08-13T00:23:30.740967305Z" level=info msg="StartContainer for \"3e1f0a4302aaca1d15a7f4b1ba5e0febaf9f1e0a44513b243ea2fbf5a3311a20\"" Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.703 [WARNING][5493] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hc28g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e841be3f-a724-4526-a7fd-880807a1af6d", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f1c7353832339c67687c3eb9c26a10cd8c62f5f73918360ff31ae2f27b332be", Pod:"coredns-668d6bf9bc-hc28g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd698746cef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.703 [INFO][5493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.703 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" iface="eth0" netns="" Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.703 [INFO][5493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.703 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.742 [INFO][5502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" HandleID="k8s-pod-network.b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.742 [INFO][5502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.742 [INFO][5502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.752 [WARNING][5502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" HandleID="k8s-pod-network.b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.752 [INFO][5502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" HandleID="k8s-pod-network.b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Workload="localhost-k8s-coredns--668d6bf9bc--hc28g-eth0" Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.754 [INFO][5502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:30.762569 containerd[1443]: 2025-08-13 00:23:30.760 [INFO][5493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7" Aug 13 00:23:30.762972 containerd[1443]: time="2025-08-13T00:23:30.762605524Z" level=info msg="TearDown network for sandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\" successfully" Aug 13 00:23:30.770503 containerd[1443]: time="2025-08-13T00:23:30.770430359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:23:30.771318 containerd[1443]: time="2025-08-13T00:23:30.770623200Z" level=info msg="RemovePodSandbox \"b92862df1d20f884431ccd1be37ea530067f41e9e2539e324e6366100348bda7\" returns successfully" Aug 13 00:23:30.771318 containerd[1443]: time="2025-08-13T00:23:30.771142563Z" level=info msg="StopPodSandbox for \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\"" Aug 13 00:23:30.776474 systemd[1]: Started cri-containerd-3e1f0a4302aaca1d15a7f4b1ba5e0febaf9f1e0a44513b243ea2fbf5a3311a20.scope - libcontainer container 3e1f0a4302aaca1d15a7f4b1ba5e0febaf9f1e0a44513b243ea2fbf5a3311a20. 
Aug 13 00:23:30.823629 containerd[1443]: time="2025-08-13T00:23:30.823529241Z" level=info msg="StartContainer for \"3e1f0a4302aaca1d15a7f4b1ba5e0febaf9f1e0a44513b243ea2fbf5a3311a20\" returns successfully" Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.847 [WARNING][5536] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"5e9c7520-456a-4fb4-9e17-2a8c3cda47aa", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2", Pod:"goldmane-768f4c5c69-9pbbd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14e8fad0ee4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.848 [INFO][5536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.848 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" iface="eth0" netns="" Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.848 [INFO][5536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.848 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.875 [INFO][5562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" HandleID="k8s-pod-network.31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.875 [INFO][5562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.875 [INFO][5562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
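The ipam_plugin.go sequence that repeats throughout this teardown (acquire the host-wide IPAM lock, release by handleID, warn "Asked to release address but it doesn't exist. Ignoring", retry by workloadID, release the lock) keeps cleanup idempotent: a second or forced teardown of an already-released sandbox finds nothing to free and is ignored rather than failed. A toy Go sketch of that release-with-fallback shape; the store, lock, and IDs are stand-ins, not Calico's real IPAM client:

    package main

    import (
    	"errors"
    	"fmt"
    	"sync"
    )

    // ErrNotFound stands in for Calico's "address doesn't exist" condition.
    var ErrNotFound = errors.New("allocation not found")

    // ipamStore is a toy model of the datastore behind ipam_plugin.go:
    // one host-wide lock, allocations keyed by handle.
    type ipamStore struct {
    	mu       sync.Mutex // the "host-wide IPAM lock" in the log
    	byHandle map[string][]string
    }

    func (s *ipamStore) releaseByHandle(handle string) ([]string, error) {
    	ips, ok := s.byHandle[handle]
    	if !ok {
    		return nil, ErrNotFound
    	}
    	delete(s.byHandle, handle)
    	return ips, nil
    }

    // releaseAddresses mirrors the logged order of operations: try the
    // handle ID first, then fall back to the workload ID, ignoring
    // not-found on either path so teardown stays idempotent.
    func (s *ipamStore) releaseAddresses(handleID, workloadID string) {
    	s.mu.Lock() // "Acquired host-wide IPAM lock."
    	defer s.mu.Unlock()

    	if _, err := s.releaseByHandle(handleID); errors.Is(err, ErrNotFound) {
    		fmt.Println("WARNING: Asked to release address but it doesn't exist. Ignoring")
    		// "Releasing address using workloadID"
    		if _, err := s.releaseByHandle(workloadID); errors.Is(err, ErrNotFound) {
    			// Forced teardown of an already-cleaned sandbox: nothing left.
    		}
    	}
    }

    func main() {
    	s := &ipamStore{byHandle: map[string][]string{}}
    	// Simulate the forced second teardown seen above: the allocation
    	// was already released, so both lookups miss and are ignored.
    	s.releaseAddresses("k8s-pod-network.b92862df1d20...", "localhost-k8s-coredns--668d6bf9bc--hc28g-eth0")
    }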
Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.884 [WARNING][5562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" HandleID="k8s-pod-network.31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.884 [INFO][5562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" HandleID="k8s-pod-network.31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.886 [INFO][5562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:30.892003 containerd[1443]: 2025-08-13 00:23:30.890 [INFO][5536] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:30.892491 containerd[1443]: time="2025-08-13T00:23:30.892038832Z" level=info msg="TearDown network for sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\" successfully" Aug 13 00:23:30.892491 containerd[1443]: time="2025-08-13T00:23:30.892064912Z" level=info msg="StopPodSandbox for \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\" returns successfully" Aug 13 00:23:30.892731 containerd[1443]: time="2025-08-13T00:23:30.892687795Z" level=info msg="RemovePodSandbox for \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\"" Aug 13 00:23:30.892761 containerd[1443]: time="2025-08-13T00:23:30.892744595Z" level=info msg="Forcibly stopping sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\"" Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.930 [WARNING][5585] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"5e9c7520-456a-4fb4-9e17-2a8c3cda47aa", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04606f2a0d90819596677064c25f87a86f14207b58b32b28023a4add9f51f3e2", Pod:"goldmane-768f4c5c69-9pbbd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14e8fad0ee4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.931 [INFO][5585] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.931 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" iface="eth0" netns="" Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.931 [INFO][5585] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.931 [INFO][5585] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.955 [INFO][5594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" HandleID="k8s-pod-network.31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.956 [INFO][5594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.956 [INFO][5594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.975 [WARNING][5594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" HandleID="k8s-pod-network.31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.975 [INFO][5594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" HandleID="k8s-pod-network.31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Workload="localhost-k8s-goldmane--768f4c5c69--9pbbd-eth0" Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.977 [INFO][5594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:30.988044 containerd[1443]: 2025-08-13 00:23:30.983 [INFO][5585] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856" Aug 13 00:23:30.988044 containerd[1443]: time="2025-08-13T00:23:30.987928508Z" level=info msg="TearDown network for sandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\" successfully" Aug 13 00:23:30.995905 containerd[1443]: time="2025-08-13T00:23:30.995281662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:23:30.995905 containerd[1443]: time="2025-08-13T00:23:30.995355022Z" level=info msg="RemovePodSandbox \"31af6f341fc50f702cb53c5797ef5010180b06cbd58cf9d27c53c9b6528bf856\" returns successfully" Aug 13 00:23:30.996159 containerd[1443]: time="2025-08-13T00:23:30.996126425Z" level=info msg="StopPodSandbox for \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\"" Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.043 [WARNING][5612] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xdctl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f", Pod:"csi-node-driver-xdctl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9b43eee59b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.044 [INFO][5612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.044 [INFO][5612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" iface="eth0" netns="" Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.044 [INFO][5612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.044 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.074 [INFO][5620] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" HandleID="k8s-pod-network.b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Workload="localhost-k8s-csi--node--driver--xdctl-eth0" Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.074 [INFO][5620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.075 [INFO][5620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.086 [WARNING][5620] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" HandleID="k8s-pod-network.b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Workload="localhost-k8s-csi--node--driver--xdctl-eth0" Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.086 [INFO][5620] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" HandleID="k8s-pod-network.b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Workload="localhost-k8s-csi--node--driver--xdctl-eth0" Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.088 [INFO][5620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:31.095439 containerd[1443]: 2025-08-13 00:23:31.093 [INFO][5612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Aug 13 00:23:31.095931 containerd[1443]: time="2025-08-13T00:23:31.095477593Z" level=info msg="TearDown network for sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\" successfully" Aug 13 00:23:31.095931 containerd[1443]: time="2025-08-13T00:23:31.095504673Z" level=info msg="StopPodSandbox for \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\" returns successfully" Aug 13 00:23:31.096379 containerd[1443]: time="2025-08-13T00:23:31.096202916Z" level=info msg="RemovePodSandbox for \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\"" Aug 13 00:23:31.096509 containerd[1443]: time="2025-08-13T00:23:31.096458517Z" level=info msg="Forcibly stopping sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\"" Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.139 [WARNING][5638] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xdctl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4fd3051e-ecbd-4cf8-b840-da7c4d8d1f77", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f", Pod:"csi-node-driver-xdctl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9b43eee59b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.140 [INFO][5638] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.140 [INFO][5638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" iface="eth0" netns="" Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.140 [INFO][5638] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.140 [INFO][5638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.168 [INFO][5648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" HandleID="k8s-pod-network.b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Workload="localhost-k8s-csi--node--driver--xdctl-eth0" Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.169 [INFO][5648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.169 [INFO][5648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.177 [WARNING][5648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" HandleID="k8s-pod-network.b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Workload="localhost-k8s-csi--node--driver--xdctl-eth0" Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.177 [INFO][5648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" HandleID="k8s-pod-network.b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Workload="localhost-k8s-csi--node--driver--xdctl-eth0" Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.178 [INFO][5648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:31.183570 containerd[1443]: 2025-08-13 00:23:31.181 [INFO][5638] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c" Aug 13 00:23:31.184177 containerd[1443]: time="2025-08-13T00:23:31.183609109Z" level=info msg="TearDown network for sandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\" successfully" Aug 13 00:23:31.186952 containerd[1443]: time="2025-08-13T00:23:31.186889124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:23:31.187065 containerd[1443]: time="2025-08-13T00:23:31.186978885Z" level=info msg="RemovePodSandbox \"b31923d85e60626586fa83b56eb78687ab6ed94563c3ad8f01a8961b39a0053c\" returns successfully" Aug 13 00:23:31.187556 containerd[1443]: time="2025-08-13T00:23:31.187521127Z" level=info msg="StopPodSandbox for \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\"" Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.232 [WARNING][5665] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"df5c37b4-17cf-442e-8788-172a4eba1e3f", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786", Pod:"coredns-668d6bf9bc-wjkgc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0ca814bba1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.232 [INFO][5665] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.233 [INFO][5665] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" iface="eth0" netns="" Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.233 [INFO][5665] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.233 [INFO][5665] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.256 [INFO][5674] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" HandleID="k8s-pod-network.f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.257 [INFO][5674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.257 [INFO][5674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.272 [WARNING][5674] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" HandleID="k8s-pod-network.f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.272 [INFO][5674] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" HandleID="k8s-pod-network.f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.273 [INFO][5674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:31.277678 containerd[1443]: 2025-08-13 00:23:31.275 [INFO][5665] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Aug 13 00:23:31.278703 containerd[1443]: time="2025-08-13T00:23:31.277656933Z" level=info msg="TearDown network for sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\" successfully" Aug 13 00:23:31.278745 containerd[1443]: time="2025-08-13T00:23:31.278700737Z" level=info msg="StopPodSandbox for \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\" returns successfully" Aug 13 00:23:31.279294 containerd[1443]: time="2025-08-13T00:23:31.279217340Z" level=info msg="RemovePodSandbox for \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\"" Aug 13 00:23:31.279294 containerd[1443]: time="2025-08-13T00:23:31.279248860Z" level=info msg="Forcibly stopping sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\"" Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.312 [WARNING][5692] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"df5c37b4-17cf-442e-8788-172a4eba1e3f", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8727dff879d0559443a28c2d25c27ea82749d7d8b3e3cc090e16a205b9acd786", Pod:"coredns-668d6bf9bc-wjkgc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0ca814bba1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.312 [INFO][5692] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.312 [INFO][5692] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" iface="eth0" netns="" Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.312 [INFO][5692] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.312 [INFO][5692] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.333 [INFO][5701] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" HandleID="k8s-pod-network.f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.333 [INFO][5701] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.333 [INFO][5701] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.347 [WARNING][5701] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" HandleID="k8s-pod-network.f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.347 [INFO][5701] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" HandleID="k8s-pod-network.f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Workload="localhost-k8s-coredns--668d6bf9bc--wjkgc-eth0" Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.349 [INFO][5701] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:31.355376 containerd[1443]: 2025-08-13 00:23:31.351 [INFO][5692] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380" Aug 13 00:23:31.355827 containerd[1443]: time="2025-08-13T00:23:31.355483243Z" level=info msg="TearDown network for sandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\" successfully" Aug 13 00:23:31.360542 containerd[1443]: time="2025-08-13T00:23:31.360479785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:23:31.360670 containerd[1443]: time="2025-08-13T00:23:31.360557386Z" level=info msg="RemovePodSandbox \"f65804882c7aa59cbf46e74b58982ac6c8e125d04ee8280748ae26fa12528380\" returns successfully" Aug 13 00:23:31.361286 containerd[1443]: time="2025-08-13T00:23:31.361254189Z" level=info msg="StopPodSandbox for \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\"" Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.401 [WARNING][5719] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0", GenerateName:"calico-apiserver-7d4b446f75-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d1ad45a-7073-4dfd-8cbf-ad24b938295e", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b446f75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3", Pod:"calico-apiserver-7d4b446f75-tvpqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54ab7533e74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.401 [INFO][5719] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.401 [INFO][5719] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" iface="eth0" netns="" Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.401 [INFO][5719] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.401 [INFO][5719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.432 [INFO][5727] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" HandleID="k8s-pod-network.db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0" Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.432 [INFO][5727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.432 [INFO][5727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.452 [WARNING][5727] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" HandleID="k8s-pod-network.db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.452 [INFO][5727] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" HandleID="k8s-pod-network.db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.454 [INFO][5727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:31.458261 containerd[1443]: 2025-08-13 00:23:31.456 [INFO][5719] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27"
Aug 13 00:23:31.458870 containerd[1443]: time="2025-08-13T00:23:31.458306586Z" level=info msg="TearDown network for sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\" successfully"
Aug 13 00:23:31.458870 containerd[1443]: time="2025-08-13T00:23:31.458334786Z" level=info msg="StopPodSandbox for \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\" returns successfully"
Aug 13 00:23:31.459415 containerd[1443]: time="2025-08-13T00:23:31.459057989Z" level=info msg="RemovePodSandbox for \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\""
Aug 13 00:23:31.459415 containerd[1443]: time="2025-08-13T00:23:31.459118869Z" level=info msg="Forcibly stopping sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\""
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.498 [WARNING][5747] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0", GenerateName:"calico-apiserver-7d4b446f75-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d1ad45a-7073-4dfd-8cbf-ad24b938295e", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b446f75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b541ccaf0999aa9deed05d65433152b61e71764c2c794f050a867e7f74fd8ea3", Pod:"calico-apiserver-7d4b446f75-tvpqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54ab7533e74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.498 [INFO][5747] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27"
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.498 [INFO][5747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" iface="eth0" netns=""
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.499 [INFO][5747] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27"
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.499 [INFO][5747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27"
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.520 [INFO][5757] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" HandleID="k8s-pod-network.db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.520 [INFO][5757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.520 [INFO][5757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.529 [WARNING][5757] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" HandleID="k8s-pod-network.db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.529 [INFO][5757] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" HandleID="k8s-pod-network.db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27" Workload="localhost-k8s-calico--apiserver--7d4b446f75--tvpqt-eth0"
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.531 [INFO][5757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:31.539833 containerd[1443]: 2025-08-13 00:23:31.537 [INFO][5747] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27"
Aug 13 00:23:31.539833 containerd[1443]: time="2025-08-13T00:23:31.539764592Z" level=info msg="TearDown network for sandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\" successfully"
Aug 13 00:23:31.544339 containerd[1443]: time="2025-08-13T00:23:31.544306973Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 00:23:31.544648 containerd[1443]: time="2025-08-13T00:23:31.544623134Z" level=info msg="RemovePodSandbox \"db6460c0e4aba4e800cdb09e459f4a0ea114851267ca60b8c9643d568e343b27\" returns successfully"
Aug 13 00:23:31.546862 containerd[1443]: time="2025-08-13T00:23:31.546780504Z" level=info msg="StopPodSandbox for \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\""
Aug 13 00:23:31.575634 kubelet[2478]: I0813 00:23:31.574495 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4b446f75-tvpqt" podStartSLOduration=28.348090694 podStartE2EDuration="33.574475709s" podCreationTimestamp="2025-08-13 00:22:58 +0000 UTC" firstStartedPulling="2025-08-13 00:23:25.493611635 +0000 UTC m=+55.398708190" lastFinishedPulling="2025-08-13 00:23:30.71999665 +0000 UTC m=+60.625093205" observedRunningTime="2025-08-13 00:23:31.573535984 +0000 UTC m=+61.478632499" watchObservedRunningTime="2025-08-13 00:23:31.574475709 +0000 UTC m=+61.479572304"
Aug 13 00:23:31.642991 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:59604.service - OpenSSH per-connection server daemon (10.0.0.1:59604).
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.605 [WARNING][5775] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0", GenerateName:"calico-apiserver-7d4b446f75-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7a20528-2e49-4684-bead-e1d4d74e5a78", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b446f75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9", Pod:"calico-apiserver-7d4b446f75-xbvj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59e5b6b9c5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.606 [INFO][5775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7"
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.606 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" iface="eth0" netns=""
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.606 [INFO][5775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7"
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.606 [INFO][5775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7"
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.628 [INFO][5785] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" HandleID="k8s-pod-network.d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0"
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.628 [INFO][5785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.628 [INFO][5785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.640 [WARNING][5785] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" HandleID="k8s-pod-network.d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0"
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.640 [INFO][5785] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" HandleID="k8s-pod-network.d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0"
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.645 [INFO][5785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:31.656121 containerd[1443]: 2025-08-13 00:23:31.651 [INFO][5775] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7"
Aug 13 00:23:31.656816 containerd[1443]: time="2025-08-13T00:23:31.656173036Z" level=info msg="TearDown network for sandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\" successfully"
Aug 13 00:23:31.656816 containerd[1443]: time="2025-08-13T00:23:31.656199996Z" level=info msg="StopPodSandbox for \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\" returns successfully"
Aug 13 00:23:31.657343 containerd[1443]: time="2025-08-13T00:23:31.656935240Z" level=info msg="RemovePodSandbox for \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\""
Aug 13 00:23:31.657343 containerd[1443]: time="2025-08-13T00:23:31.656972520Z" level=info msg="Forcibly stopping sandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\""
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.695 [WARNING][5804] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0", GenerateName:"calico-apiserver-7d4b446f75-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7a20528-2e49-4684-bead-e1d4d74e5a78", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4b446f75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64ef87cc3cbc787341da26abde87093814fb3e4a9a04c55dd7d7eb897789cbc9", Pod:"calico-apiserver-7d4b446f75-xbvj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59e5b6b9c5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.695 [INFO][5804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7"
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.695 [INFO][5804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" iface="eth0" netns=""
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.695 [INFO][5804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7"
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.695 [INFO][5804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7"
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.714 [INFO][5812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" HandleID="k8s-pod-network.d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0"
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.714 [INFO][5812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.714 [INFO][5812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.751 [WARNING][5812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" HandleID="k8s-pod-network.d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0"
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.751 [INFO][5812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" HandleID="k8s-pod-network.d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7" Workload="localhost-k8s-calico--apiserver--7d4b446f75--xbvj6-eth0"
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.754 [INFO][5812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:31.763608 containerd[1443]: 2025-08-13 00:23:31.759 [INFO][5804] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7"
Aug 13 00:23:31.763608 containerd[1443]: time="2025-08-13T00:23:31.763421479Z" level=info msg="TearDown network for sandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\" successfully"
Aug 13 00:23:31.801774 containerd[1443]: time="2025-08-13T00:23:31.801566851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 00:23:31.801774 containerd[1443]: time="2025-08-13T00:23:31.801738571Z" level=info msg="RemovePodSandbox \"d44b7c2c0bc474ba88d90687f66fed5d2598a9a3eb1bbdd424f70a427754c9a7\" returns successfully"
Aug 13 00:23:31.831655 sshd[5793]: Accepted publickey for core from 10.0.0.1 port 59604 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:31.833224 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:31.838210 systemd-logind[1425]: New session 9 of user core.
Aug 13 00:23:31.845237 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 00:23:32.046836 containerd[1443]: time="2025-08-13T00:23:32.045569027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:32.047609 containerd[1443]: time="2025-08-13T00:23:32.047562116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Aug 13 00:23:32.048624 containerd[1443]: time="2025-08-13T00:23:32.048586440Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:32.050955 containerd[1443]: time="2025-08-13T00:23:32.050916171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:32.053120 containerd[1443]: time="2025-08-13T00:23:32.052235297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.331739685s"
Aug 13 00:23:32.053120 containerd[1443]: time="2025-08-13T00:23:32.052270337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Aug 13 00:23:32.054618 containerd[1443]: time="2025-08-13T00:23:32.054567827Z" level=info msg="CreateContainer within sandbox \"8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 00:23:32.070385 containerd[1443]: time="2025-08-13T00:23:32.069754655Z" level=info msg="CreateContainer within sandbox \"8a00db2a918a0ad683e92e78e2a652bbe64805814aaa2a9df9f4c7faa110bd0f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a0b2a1127a2e31b455dbdf2d487b0d56ad5bed949c6811cff568469158b6e928\""
Aug 13 00:23:32.070017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302115326.mount: Deactivated successfully.
Aug 13 00:23:32.072047 containerd[1443]: time="2025-08-13T00:23:32.070844420Z" level=info msg="StartContainer for \"a0b2a1127a2e31b455dbdf2d487b0d56ad5bed949c6811cff568469158b6e928\""
Aug 13 00:23:32.116252 systemd[1]: Started cri-containerd-a0b2a1127a2e31b455dbdf2d487b0d56ad5bed949c6811cff568469158b6e928.scope - libcontainer container a0b2a1127a2e31b455dbdf2d487b0d56ad5bed949c6811cff568469158b6e928.
Aug 13 00:23:32.156887 containerd[1443]: time="2025-08-13T00:23:32.155383157Z" level=info msg="StartContainer for \"a0b2a1127a2e31b455dbdf2d487b0d56ad5bed949c6811cff568469158b6e928\" returns successfully"
Aug 13 00:23:32.279914 kubelet[2478]: I0813 00:23:32.279870 2478 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 00:23:32.279914 kubelet[2478]: I0813 00:23:32.279930 2478 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 00:23:32.421479 sshd[5793]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:32.426663 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:59604.service: Deactivated successfully.
Aug 13 00:23:32.428416 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 00:23:32.430036 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit.
Aug 13 00:23:32.431923 systemd-logind[1425]: Removed session 9.
Aug 13 00:23:32.487583 systemd[1]: run-containerd-runc-k8s.io-a0b2a1127a2e31b455dbdf2d487b0d56ad5bed949c6811cff568469158b6e928-runc.098NoP.mount: Deactivated successfully.
Aug 13 00:23:32.563667 kubelet[2478]: I0813 00:23:32.563603 2478 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:23:32.574245 kubelet[2478]: I0813 00:23:32.574185 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xdctl" podStartSLOduration=24.125586085 podStartE2EDuration="31.574166784s" podCreationTimestamp="2025-08-13 00:23:01 +0000 UTC" firstStartedPulling="2025-08-13 00:23:24.60425632 +0000 UTC m=+54.509352835" lastFinishedPulling="2025-08-13 00:23:32.052836979 +0000 UTC m=+61.957933534" observedRunningTime="2025-08-13 00:23:32.5734047 +0000 UTC m=+62.478501255" watchObservedRunningTime="2025-08-13 00:23:32.574166784 +0000 UTC m=+62.479263339"
Aug 13 00:23:37.139642 kubelet[2478]: I0813 00:23:37.139451 2478 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:23:37.433735 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:60452.service - OpenSSH per-connection server daemon (10.0.0.1:60452).
Aug 13 00:23:37.533976 sshd[5891]: Accepted publickey for core from 10.0.0.1 port 60452 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:37.535989 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:37.540281 systemd-logind[1425]: New session 10 of user core.
Aug 13 00:23:37.549314 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 00:23:37.856110 sshd[5891]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:37.871858 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:60452.service: Deactivated successfully.
Aug 13 00:23:37.874687 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:23:37.876397 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:23:37.885442 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:60464.service - OpenSSH per-connection server daemon (10.0.0.1:60464).
Aug 13 00:23:37.888657 systemd-logind[1425]: Removed session 10.
Aug 13 00:23:37.925401 sshd[5906]: Accepted publickey for core from 10.0.0.1 port 60464 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:37.927011 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:37.934515 systemd-logind[1425]: New session 11 of user core.
Aug 13 00:23:37.946313 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 00:23:38.182961 sshd[5906]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:38.196436 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:60464.service: Deactivated successfully.
Aug 13 00:23:38.200468 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:23:38.206539 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:23:38.218640 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:60474.service - OpenSSH per-connection server daemon (10.0.0.1:60474).
Aug 13 00:23:38.221389 systemd-logind[1425]: Removed session 11.
Aug 13 00:23:38.254200 sshd[5919]: Accepted publickey for core from 10.0.0.1 port 60474 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:38.256122 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:38.263062 systemd-logind[1425]: New session 12 of user core.
Aug 13 00:23:38.272854 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:23:38.447911 sshd[5919]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:38.454772 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:60474.service: Deactivated successfully.
Aug 13 00:23:38.458986 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:23:38.460064 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:23:38.461699 systemd-logind[1425]: Removed session 12.
Aug 13 00:23:43.460672 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:42770.service - OpenSSH per-connection server daemon (10.0.0.1:42770).
Aug 13 00:23:43.498214 sshd[5945]: Accepted publickey for core from 10.0.0.1 port 42770 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:43.499785 sshd[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:43.507247 systemd-logind[1425]: New session 13 of user core.
Aug 13 00:23:43.514712 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:23:43.649316 sshd[5945]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:43.659887 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:42770.service: Deactivated successfully.
Aug 13 00:23:43.662920 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:23:43.667890 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:23:43.677652 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:42780.service - OpenSSH per-connection server daemon (10.0.0.1:42780).
Aug 13 00:23:43.679524 systemd-logind[1425]: Removed session 13.
Aug 13 00:23:43.714335 sshd[5959]: Accepted publickey for core from 10.0.0.1 port 42780 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:43.715756 sshd[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:43.720140 systemd-logind[1425]: New session 14 of user core.
Aug 13 00:23:43.735238 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:23:43.955609 sshd[5959]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:43.967294 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:42780.service: Deactivated successfully.
Aug 13 00:23:43.972612 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:23:43.978776 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:23:43.991406 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:42788.service - OpenSSH per-connection server daemon (10.0.0.1:42788).
Aug 13 00:23:43.992534 systemd-logind[1425]: Removed session 14.
Aug 13 00:23:44.026090 sshd[5972]: Accepted publickey for core from 10.0.0.1 port 42788 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:44.027427 sshd[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:44.031852 systemd-logind[1425]: New session 15 of user core.
Aug 13 00:23:44.041262 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:23:44.624234 sshd[5972]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:44.635599 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:42788.service: Deactivated successfully.
Aug 13 00:23:44.637418 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:23:44.639692 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:23:44.652569 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:42792.service - OpenSSH per-connection server daemon (10.0.0.1:42792).
Aug 13 00:23:44.657118 systemd-logind[1425]: Removed session 15.
Aug 13 00:23:44.691344 sshd[5992]: Accepted publickey for core from 10.0.0.1 port 42792 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:44.692775 sshd[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:44.698907 systemd-logind[1425]: New session 16 of user core.
Aug 13 00:23:44.713296 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:23:45.175287 sshd[5992]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:45.187270 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:42792.service: Deactivated successfully.
Aug 13 00:23:45.189324 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:23:45.191156 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:23:45.192613 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:42798.service - OpenSSH per-connection server daemon (10.0.0.1:42798).
Aug 13 00:23:45.195199 systemd-logind[1425]: Removed session 16.
Aug 13 00:23:45.250032 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 42798 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:45.251817 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:45.256183 systemd-logind[1425]: New session 17 of user core.
Aug 13 00:23:45.264411 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:23:45.418733 sshd[6006]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:45.422302 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:42798.service: Deactivated successfully.
Aug 13 00:23:45.425280 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:23:45.426270 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:23:45.427544 systemd-logind[1425]: Removed session 17.
Aug 13 00:23:50.430200 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:42802.service - OpenSSH per-connection server daemon (10.0.0.1:42802).
Aug 13 00:23:50.473745 sshd[6043]: Accepted publickey for core from 10.0.0.1 port 42802 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:50.474758 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:50.481432 systemd-logind[1425]: New session 18 of user core.
Aug 13 00:23:50.492327 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:23:50.648802 sshd[6043]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:50.652943 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:42802.service: Deactivated successfully.
Aug 13 00:23:50.657169 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:23:50.657915 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:23:50.658955 systemd-logind[1425]: Removed session 18.
Aug 13 00:23:51.178312 kubelet[2478]: E0813 00:23:51.178271 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:23:52.179738 kubelet[2478]: E0813 00:23:52.179689 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:23:55.662381 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:57102.service - OpenSSH per-connection server daemon (10.0.0.1:57102).
Aug 13 00:23:55.706203 sshd[6060]: Accepted publickey for core from 10.0.0.1 port 57102 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:23:55.708027 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:55.713672 systemd-logind[1425]: New session 19 of user core.
Aug 13 00:23:55.722452 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:23:56.054067 sshd[6060]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:56.064034 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:57102.service: Deactivated successfully.
Aug 13 00:23:56.070541 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:23:56.072456 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:23:56.073962 systemd-logind[1425]: Removed session 19.
Aug 13 00:23:56.178917 kubelet[2478]: E0813 00:23:56.178500 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:24:01.066004 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:57108.service - OpenSSH per-connection server daemon (10.0.0.1:57108).
Aug 13 00:24:01.106872 sshd[6145]: Accepted publickey for core from 10.0.0.1 port 57108 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:24:01.110438 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:01.117091 systemd-logind[1425]: New session 20 of user core.
Aug 13 00:24:01.123295 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:24:01.177823 kubelet[2478]: E0813 00:24:01.177782 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:24:01.288312 sshd[6145]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:01.293223 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:57108.service: Deactivated successfully.
Aug 13 00:24:01.297354 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:24:01.300407 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:24:01.302870 systemd-logind[1425]: Removed session 20.