Jul 12 00:16:46.949755 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:16:46.949776 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:16:46.949786 kernel: KASLR enabled
Jul 12 00:16:46.949791 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:16:46.949797 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 12 00:16:46.949802 kernel: random: crng init done
Jul 12 00:16:46.949809 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:16:46.949815 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 12 00:16:46.949821 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:16:46.949828 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:46.949834 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:46.949840 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:46.949846 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:46.949852 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:46.949859 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:46.949873 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:46.949879 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:46.949886 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:46.949892 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 12 00:16:46.949899 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:16:46.949905 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:16:46.949911 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 12 00:16:46.949918 kernel: Zone ranges:
Jul 12 00:16:46.949924 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:16:46.949933 kernel: DMA32 empty
Jul 12 00:16:46.949941 kernel: Normal empty
Jul 12 00:16:46.949947 kernel: Movable zone start for each node
Jul 12 00:16:46.949954 kernel: Early memory node ranges
Jul 12 00:16:46.949960 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 12 00:16:46.949969 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 12 00:16:46.949976 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 12 00:16:46.949982 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 12 00:16:46.949989 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 12 00:16:46.949995 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 12 00:16:46.950001 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 12 00:16:46.950008 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:16:46.950015 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 12 00:16:46.950022 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:16:46.950029 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:16:46.950035 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:16:46.950044 kernel: psci: Trusted OS migration not required
Jul 12 00:16:46.950051 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:16:46.950063 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 00:16:46.950072 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:16:46.950079 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:16:46.950086 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 12 00:16:46.950093 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:16:46.950100 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:16:46.950106 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:16:46.950113 kernel: CPU features: detected: Spectre-v4
Jul 12 00:16:46.950120 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:16:46.950126 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:16:46.950134 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:16:46.950142 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:16:46.950149 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:16:46.950155 kernel: alternatives: applying boot alternatives
Jul 12 00:16:46.950163 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:16:46.950174 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:16:46.950192 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:16:46.950198 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:16:46.950205 kernel: Fallback order for Node 0: 0
Jul 12 00:16:46.950212 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 12 00:16:46.950218 kernel: Policy zone: DMA
Jul 12 00:16:46.950225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:16:46.950233 kernel: software IO TLB: area num 4.
Jul 12 00:16:46.950240 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 12 00:16:46.950247 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 12 00:16:46.950254 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 00:16:46.950261 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:16:46.950269 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:16:46.950276 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 00:16:46.950283 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:16:46.950290 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:16:46.950297 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:16:46.950304 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 00:16:46.950311 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:16:46.950319 kernel: GICv3: 256 SPIs implemented
Jul 12 00:16:46.950326 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:16:46.950332 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:16:46.950339 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:16:46.950345 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 00:16:46.950352 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 00:16:46.950359 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:16:46.950366 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:16:46.950374 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 12 00:16:46.950393 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 12 00:16:46.950404 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:16:46.950413 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:16:46.950420 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:16:46.950427 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:16:46.950434 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:16:46.950441 kernel: arm-pv: using stolen time PV
Jul 12 00:16:46.950448 kernel: Console: colour dummy device 80x25
Jul 12 00:16:46.950456 kernel: ACPI: Core revision 20230628
Jul 12 00:16:46.950463 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:16:46.950470 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:16:46.950477 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:16:46.950485 kernel: landlock: Up and running.
Jul 12 00:16:46.950492 kernel: SELinux: Initializing.
Jul 12 00:16:46.950498 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:16:46.950506 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:16:46.950513 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:16:46.950520 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:16:46.950526 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:16:46.950533 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:16:46.950540 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 12 00:16:46.950548 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 12 00:16:46.950555 kernel: Remapping and enabling EFI services.
Jul 12 00:16:46.950561 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:16:46.950568 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:16:46.950575 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 00:16:46.950582 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 12 00:16:46.950589 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:16:46.950595 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:16:46.950602 kernel: Detected PIPT I-cache on CPU2
Jul 12 00:16:46.950609 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 12 00:16:46.950617 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 12 00:16:46.950625 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:16:46.950636 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 12 00:16:46.950644 kernel: Detected PIPT I-cache on CPU3
Jul 12 00:16:46.950651 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 12 00:16:46.950659 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 12 00:16:46.950666 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:16:46.950673 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 12 00:16:46.950680 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 00:16:46.950689 kernel: SMP: Total of 4 processors activated.
Jul 12 00:16:46.950696 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:16:46.950703 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:16:46.950710 kernel: CPU features: detected: Common not Private translations
Jul 12 00:16:46.950718 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:16:46.950725 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 12 00:16:46.950732 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:16:46.950739 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:16:46.950747 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:16:46.950755 kernel: CPU features: detected: RAS Extension Support
Jul 12 00:16:46.950762 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 00:16:46.950769 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:16:46.950776 kernel: alternatives: applying system-wide alternatives
Jul 12 00:16:46.950783 kernel: devtmpfs: initialized
Jul 12 00:16:46.950790 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:16:46.950798 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 00:16:46.950805 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:16:46.950813 kernel: SMBIOS 3.0.0 present.
Jul 12 00:16:46.950821 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 12 00:16:46.950828 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:16:46.950835 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:16:46.950843 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:16:46.950851 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:16:46.950858 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:16:46.950866 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 12 00:16:46.950873 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:16:46.950881 kernel: cpuidle: using governor menu
Jul 12 00:16:46.950889 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:16:46.950896 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:16:46.950903 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:16:46.950910 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:16:46.950917 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:16:46.950925 kernel: Modules: 0 pages in range for non-PLT usage
Jul 12 00:16:46.950932 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:16:46.950939 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:16:46.950948 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:16:46.950955 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:16:46.950962 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:16:46.950969 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:16:46.950977 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:16:46.950984 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:16:46.950991 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:16:46.950998 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:16:46.951005 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:16:46.951013 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:16:46.951021 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:16:46.951028 kernel: ACPI: Interpreter enabled
Jul 12 00:16:46.951035 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:16:46.951043 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:16:46.951050 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:16:46.951057 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:16:46.951069 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:16:46.951204 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:16:46.951284 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:16:46.951353 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:16:46.951435 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 00:16:46.951504 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 00:16:46.951513 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 00:16:46.951521 kernel: PCI host bridge to bus 0000:00
Jul 12 00:16:46.951597 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 00:16:46.951665 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:16:46.951724 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 00:16:46.951780 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:16:46.951873 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 12 00:16:46.951950 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 12 00:16:46.952018 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 12 00:16:46.952102 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 12 00:16:46.952172 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:16:46.952246 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:16:46.952325 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 12 00:16:46.952425 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 12 00:16:46.952494 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 00:16:46.952566 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:16:46.952648 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 00:16:46.952659 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:16:46.952666 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:16:46.952674 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:16:46.952682 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:16:46.952689 kernel: iommu: Default domain type: Translated
Jul 12 00:16:46.952696 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:16:46.952703 kernel: efivars: Registered efivars operations
Jul 12 00:16:46.952713 kernel: vgaarb: loaded
Jul 12 00:16:46.952720 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:16:46.952728 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:16:46.952735 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:16:46.952742 kernel: pnp: PnP ACPI init
Jul 12 00:16:46.952813 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 00:16:46.952824 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:16:46.952831 kernel: NET: Registered PF_INET protocol family
Jul 12 00:16:46.952838 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:16:46.952853 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:16:46.952861 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:16:46.952869 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:16:46.952876 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:16:46.952883 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:16:46.952891 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:16:46.952898 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:16:46.952905 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:16:46.952914 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:16:46.952921 kernel: kvm [1]: HYP mode not available
Jul 12 00:16:46.952929 kernel: Initialise system trusted keyrings
Jul 12 00:16:46.952939 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:16:46.952946 kernel: Key type asymmetric registered
Jul 12 00:16:46.952954 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:16:46.952961 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:16:46.952968 kernel: io scheduler mq-deadline registered
Jul 12 00:16:46.952976 kernel: io scheduler kyber registered
Jul 12 00:16:46.952983 kernel: io scheduler bfq registered
Jul 12 00:16:46.952992 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:16:46.952999 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:16:46.953007 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 00:16:46.953085 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 12 00:16:46.953096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:16:46.953104 kernel: thunder_xcv, ver 1.0
Jul 12 00:16:46.953111 kernel: thunder_bgx, ver 1.0
Jul 12 00:16:46.953118 kernel: nicpf, ver 1.0
Jul 12 00:16:46.953125 kernel: nicvf, ver 1.0
Jul 12 00:16:46.953202 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:16:46.953264 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:16:46 UTC (1752279406)
Jul 12 00:16:46.953274 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:16:46.953282 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 12 00:16:46.953289 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 12 00:16:46.953297 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:16:46.953304 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:16:46.953311 kernel: Segment Routing with IPv6
Jul 12 00:16:46.953321 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:16:46.953328 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:16:46.953336 kernel: Key type dns_resolver registered
Jul 12 00:16:46.953343 kernel: registered taskstats version 1
Jul 12 00:16:46.953350 kernel: Loading compiled-in X.509 certificates
Jul 12 00:16:46.953357 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15'
Jul 12 00:16:46.953364 kernel: Key type .fscrypt registered
Jul 12 00:16:46.953371 kernel: Key type fscrypt-provisioning registered
Jul 12 00:16:46.953460 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:16:46.953471 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:16:46.953479 kernel: ima: No architecture policies found
Jul 12 00:16:46.953486 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:16:46.953493 kernel: clk: Disabling unused clocks
Jul 12 00:16:46.953500 kernel: Freeing unused kernel memory: 39424K
Jul 12 00:16:46.953507 kernel: Run /init as init process
Jul 12 00:16:46.953514 kernel: with arguments:
Jul 12 00:16:46.953521 kernel: /init
Jul 12 00:16:46.953529 kernel: with environment:
Jul 12 00:16:46.953537 kernel: HOME=/
Jul 12 00:16:46.953544 kernel: TERM=linux
Jul 12 00:16:46.953551 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:16:46.953560 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:16:46.953569 systemd[1]: Detected virtualization kvm.
Jul 12 00:16:46.953577 systemd[1]: Detected architecture arm64.
Jul 12 00:16:46.953584 systemd[1]: Running in initrd.
Jul 12 00:16:46.953593 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:16:46.953601 systemd[1]: Hostname set to .
Jul 12 00:16:46.953609 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:16:46.953616 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:16:46.953624 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:16:46.953632 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:16:46.953640 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:16:46.953648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:16:46.953657 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:16:46.953665 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:16:46.953675 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:16:46.953683 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:16:46.953691 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:16:46.953699 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:16:46.953707 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:16:46.953715 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:16:46.953723 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:16:46.953731 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:16:46.953739 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:16:46.953747 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:16:46.953755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:16:46.953762 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:16:46.953770 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:16:46.953778 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:16:46.953787 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:16:46.953795 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:16:46.953803 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:16:46.953811 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:16:46.953819 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:16:46.953826 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:16:46.953834 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:16:46.953842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:16:46.953851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:16:46.953859 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:16:46.953874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:16:46.953882 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:16:46.953890 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:16:46.953900 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:16:46.953929 systemd-journald[237]: Collecting audit messages is disabled.
Jul 12 00:16:46.953948 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:16:46.953958 systemd-journald[237]: Journal started
Jul 12 00:16:46.953977 systemd-journald[237]: Runtime Journal (/run/log/journal/7f50c510fbc1423e962900be3c50c8a2) is 5.9M, max 47.3M, 41.4M free.
Jul 12 00:16:46.960515 kernel: Bridge firewalling registered
Jul 12 00:16:46.940946 systemd-modules-load[239]: Inserted module 'overlay'
Jul 12 00:16:46.962578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:16:46.957977 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 12 00:16:46.965979 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:16:46.966450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:16:46.967676 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:16:46.969705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:16:46.974637 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:16:46.976437 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:16:46.980099 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:16:46.989304 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:16:46.990735 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:16:47.003650 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:16:47.004842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:16:47.007708 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:16:47.021076 dracut-cmdline[280]: dracut-dracut-053
Jul 12 00:16:47.023777 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:16:47.034465 systemd-resolved[276]: Positive Trust Anchors:
Jul 12 00:16:47.034483 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:16:47.034515 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:16:47.039328 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jul 12 00:16:47.044263 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:16:47.045528 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:16:47.093433 kernel: SCSI subsystem initialized
Jul 12 00:16:47.098398 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:16:47.106402 kernel: iscsi: registered transport (tcp)
Jul 12 00:16:47.119772 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:16:47.119787 kernel: QLogic iSCSI HBA Driver
Jul 12 00:16:47.165132 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:16:47.173646 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:16:47.192739 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:16:47.192788 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:16:47.193898 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:16:47.257418 kernel: raid6: neonx8 gen() 15786 MB/s
Jul 12 00:16:47.274413 kernel: raid6: neonx4 gen() 15626 MB/s
Jul 12 00:16:47.291405 kernel: raid6: neonx2 gen() 13255 MB/s
Jul 12 00:16:47.308403 kernel: raid6: neonx1 gen() 10466 MB/s
Jul 12 00:16:47.325411 kernel: raid6: int64x8 gen() 6955 MB/s
Jul 12 00:16:47.342401 kernel: raid6: int64x4 gen() 7341 MB/s
Jul 12 00:16:47.359401 kernel: raid6: int64x2 gen() 6125 MB/s
Jul 12 00:16:47.376507 kernel: raid6: int64x1 gen() 5052 MB/s
Jul 12 00:16:47.376523 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s
Jul 12 00:16:47.394492 kernel: raid6: .... xor() 11894 MB/s, rmw enabled
Jul 12 00:16:47.394509 kernel: raid6: using neon recovery algorithm
Jul 12 00:16:47.399402 kernel: xor: measuring software checksum speed
Jul 12 00:16:47.400691 kernel: 8regs : 17492 MB/sec
Jul 12 00:16:47.400717 kernel: 32regs : 19097 MB/sec
Jul 12 00:16:47.401940 kernel: arm64_neon : 26998 MB/sec
Jul 12 00:16:47.401953 kernel: xor: using function: arm64_neon (26998 MB/sec)
Jul 12 00:16:47.453412 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:16:47.470014 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:16:47.481602 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:16:47.495253 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jul 12 00:16:47.498456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:16:47.505584 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:16:47.520332 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jul 12 00:16:47.550484 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:16:47.560592 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:16:47.603587 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:16:47.609572 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:16:47.623897 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:16:47.625625 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:16:47.628784 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:16:47.630002 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:16:47.638564 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:16:47.649036 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:16:47.657414 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 12 00:16:47.657564 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 12 00:16:47.661696 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:16:47.661726 kernel: GPT:9289727 != 19775487
Jul 12 00:16:47.661736 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:16:47.662786 kernel: GPT:9289727 != 19775487
Jul 12 00:16:47.663216 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:16:47.665359 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:16:47.665407 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:16:47.663338 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:16:47.667948 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:16:47.669104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:16:47.669615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:16:47.671701 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:16:47.678729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:16:47.684621 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (516)
Jul 12 00:16:47.692465 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (508)
Jul 12 00:16:47.693695 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 12 00:16:47.695286 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:16:47.707999 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 12 00:16:47.712844 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 00:16:47.716960 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 12 00:16:47.718259 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 12 00:16:47.740581 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:16:47.742435 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:16:47.763437 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:16:47.769920 disk-uuid[553]: Primary Header is updated.
Jul 12 00:16:47.769920 disk-uuid[553]: Secondary Entries is updated.
Jul 12 00:16:47.769920 disk-uuid[553]: Secondary Header is updated.
Jul 12 00:16:47.777821 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:16:48.787472 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:16:48.789159 disk-uuid[562]: The operation has completed successfully.
Jul 12 00:16:48.812708 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:16:48.812845 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:16:48.835599 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:16:48.838998 sh[577]: Success
Jul 12 00:16:48.854399 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:16:48.885223 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:16:48.897770 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:16:48.899722 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:16:48.911434 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:16:48.911486 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:16:48.912599 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:16:48.913418 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:16:48.914396 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:16:48.917884 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:16:48.919428 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:16:48.938608 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:16:48.940296 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:16:48.949427 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:16:48.949475 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:16:48.949491 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:16:48.953403 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:16:48.963491 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:16:48.965220 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:16:48.971464 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:16:48.979596 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:16:49.037135 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:16:49.046605 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:16:49.076075 systemd-networkd[763]: lo: Link UP
Jul 12 00:16:49.076086 systemd-networkd[763]: lo: Gained carrier
Jul 12 00:16:49.076840 systemd-networkd[763]: Enumeration completed
Jul 12 00:16:49.077115 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:16:49.077269 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:16:49.077272 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:16:49.078092 systemd-networkd[763]: eth0: Link UP
Jul 12 00:16:49.078096 systemd-networkd[763]: eth0: Gained carrier
Jul 12 00:16:49.078102 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:16:49.078603 systemd[1]: Reached target network.target - Network.
Jul 12 00:16:49.091009 ignition[679]: Ignition 2.19.0
Jul 12 00:16:49.091016 ignition[679]: Stage: fetch-offline
Jul 12 00:16:49.091064 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:49.091074 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:49.091225 ignition[679]: parsed url from cmdline: ""
Jul 12 00:16:49.091229 ignition[679]: no config URL provided
Jul 12 00:16:49.091233 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:16:49.091240 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:16:49.091265 ignition[679]: op(1): [started] loading QEMU firmware config module
Jul 12 00:16:49.091270 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 12 00:16:49.100440 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:16:49.098254 ignition[679]: op(1): [finished] loading QEMU firmware config module
Jul 12 00:16:49.098273 ignition[679]: QEMU firmware config was not found. Ignoring...
Jul 12 00:16:49.139198 ignition[679]: parsing config with SHA512: 2448fc72d936771a953645b0145faf83e6740892dade3df2d527d9c0d32f73c88158f10f362154a55e8fe817e154d2a42aba3434fc0bbb861dd827e2ad25239f
Jul 12 00:16:49.143252 unknown[679]: fetched base config from "system"
Jul 12 00:16:49.143262 unknown[679]: fetched user config from "qemu"
Jul 12 00:16:49.143823 ignition[679]: fetch-offline: fetch-offline passed
Jul 12 00:16:49.143767 systemd-resolved[276]: Detected conflict on linux IN A 10.0.0.82
Jul 12 00:16:49.143891 ignition[679]: Ignition finished successfully
Jul 12 00:16:49.143775 systemd-resolved[276]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Jul 12 00:16:49.145431 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:16:49.148926 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 12 00:16:49.159577 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:16:49.169643 ignition[776]: Ignition 2.19.0
Jul 12 00:16:49.169653 ignition[776]: Stage: kargs
Jul 12 00:16:49.169811 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:49.169820 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:49.170710 ignition[776]: kargs: kargs passed
Jul 12 00:16:49.170754 ignition[776]: Ignition finished successfully
Jul 12 00:16:49.174449 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:16:49.187569 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:16:49.197042 ignition[784]: Ignition 2.19.0
Jul 12 00:16:49.197061 ignition[784]: Stage: disks
Jul 12 00:16:49.197230 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:49.197240 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:49.198088 ignition[784]: disks: disks passed
Jul 12 00:16:49.198131 ignition[784]: Ignition finished successfully
Jul 12 00:16:49.200447 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:16:49.202074 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:16:49.203491 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:16:49.205368 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:16:49.207262 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:16:49.209210 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:16:49.221536 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:16:49.231131 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 12 00:16:49.235566 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:16:49.238335 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:16:49.282300 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:16:49.283840 kernel: EXT4-fs (vda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:16:49.283642 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:16:49.301484 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:16:49.303766 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:16:49.304772 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 12 00:16:49.304812 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:16:49.304833 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:16:49.310880 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:16:49.313126 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:16:49.318700 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (803)
Jul 12 00:16:49.318723 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:16:49.318734 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:16:49.318743 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:16:49.320414 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:16:49.322115 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:16:49.361593 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:16:49.366167 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:16:49.370757 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:16:49.374810 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:16:49.445456 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:16:49.459478 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:16:49.461957 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:16:49.467391 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:16:49.480224 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:16:49.484734 ignition[916]: INFO : Ignition 2.19.0
Jul 12 00:16:49.484734 ignition[916]: INFO : Stage: mount
Jul 12 00:16:49.486337 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:49.486337 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:49.486337 ignition[916]: INFO : mount: mount passed
Jul 12 00:16:49.486337 ignition[916]: INFO : Ignition finished successfully
Jul 12 00:16:49.487818 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:16:49.502521 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:16:49.908647 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:16:49.918590 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:16:49.925368 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (929)
Jul 12 00:16:49.925417 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:16:49.925430 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:16:49.926967 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:16:49.934400 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:16:49.935648 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:16:49.959204 ignition[946]: INFO : Ignition 2.19.0
Jul 12 00:16:49.959204 ignition[946]: INFO : Stage: files
Jul 12 00:16:49.961116 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:49.961116 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:49.961116 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:16:49.964567 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:16:49.964567 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:16:49.964567 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:16:49.964567 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:16:49.964567 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:16:49.963595 unknown[946]: wrote ssh authorized keys file for user: core
Jul 12 00:16:49.972023 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 12 00:16:49.972023 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 12 00:16:50.449527 systemd-networkd[763]: eth0: Gained IPv6LL
Jul 12 00:16:50.673737 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:16:51.821435 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 12 00:16:51.821435 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:16:51.825422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 12 00:16:52.338891 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 12 00:16:52.725752 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:16:52.725752 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 12 00:16:52.729454 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:16:52.729454 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:16:52.729454 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 12 00:16:52.729454 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 12 00:16:52.729454 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:16:52.729454 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:16:52.729454 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 12 00:16:52.729454 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:16:52.767080 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:16:52.771857 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:16:52.774654 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:16:52.774654 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:16:52.774654 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:16:52.774654 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:16:52.774654 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:16:52.774654 ignition[946]: INFO : files: files passed
Jul 12 00:16:52.774654 ignition[946]: INFO : Ignition finished successfully
Jul 12 00:16:52.774838 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:16:52.785585 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:16:52.789003 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:16:52.792233 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:16:52.793414 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:16:52.797119 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 12 00:16:52.802255 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:16:52.802255 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:16:52.805768 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:16:52.806962 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:16:52.808947 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:16:52.820547 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:16:52.846945 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:16:52.847093 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:16:52.849534 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:16:52.851473 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:16:52.853476 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:16:52.863564 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:16:52.876174 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:16:52.878875 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:16:52.890744 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:16:52.892025 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:16:52.894128 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:16:52.895895 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:16:52.896030 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:16:52.898549 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:16:52.900527 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:16:52.902244 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:16:52.903993 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:16:52.905908 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:16:52.907893 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:16:52.909767 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:16:52.911794 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:16:52.913870 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:16:52.915823 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:16:52.917360 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:16:52.917521 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:16:52.919992 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:16:52.922143 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:16:52.924120 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 00:16:52.927442 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:16:52.928796 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:16:52.928924 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 00:16:52.931979 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:16:52.932114 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:16:52.934205 systemd[1]: Stopped target paths.target - Path Units. Jul 12 00:16:52.935867 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:16:52.940435 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:16:52.941819 systemd[1]: Stopped target slices.target - Slice Units. Jul 12 00:16:52.944088 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 00:16:52.945737 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:16:52.945833 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:16:52.947511 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:16:52.947600 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:16:52.949256 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:16:52.949371 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:16:52.951256 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:16:52.951369 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 00:16:52.965584 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 00:16:52.966568 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:16:52.966717 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:16:52.969421 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 12 00:16:52.970296 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:16:52.970447 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:16:52.972675 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:16:52.972787 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:16:52.978104 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:16:52.979929 ignition[1001]: INFO : Ignition 2.19.0 Jul 12 00:16:52.979929 ignition[1001]: INFO : Stage: umount Jul 12 00:16:52.979929 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:16:52.979929 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:16:52.979528 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 12 00:16:52.988975 ignition[1001]: INFO : umount: umount passed Jul 12 00:16:52.988975 ignition[1001]: INFO : Ignition finished successfully Jul 12 00:16:52.982276 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:16:52.982374 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 12 00:16:52.985020 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
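The "no configs at"/"no config dir at" pair printed by the umount stage reflects Ignition's base-config lookup order: the generic /usr/lib/ignition/base.d first, then the platform-specific /usr/lib/ignition/base.platform.d/qemu. A rough sketch of that lookup (directory names are taken from the log; merge semantics and exact messages are simplified):

    import json, os

    BASE_DIRS = ["/usr/lib/ignition/base.d",
                 "/usr/lib/ignition/base.platform.d/qemu"]

    def load_base_configs():
        configs = []
        for d in BASE_DIRS:
            if not os.path.isdir(d) or not os.listdir(d):
                print(f'no configs at "{d}"')
                continue
            for name in sorted(os.listdir(d)):
                with open(os.path.join(d, name)) as f:
                    configs.append(json.load(f))
        return configs

    load_base_configs()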
Jul 12 00:16:52.985880 systemd[1]: Stopped target network.target - Network. Jul 12 00:16:52.987737 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:16:52.987814 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 00:16:52.990098 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:16:52.990150 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 00:16:52.991803 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:16:52.991846 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 12 00:16:52.993614 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 00:16:52.993659 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 00:16:52.995816 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 12 00:16:52.997718 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 00:16:53.010444 systemd-networkd[763]: eth0: DHCPv6 lease lost Jul 12 00:16:53.011349 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:16:53.011475 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 00:16:53.013760 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:16:53.013865 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 00:16:53.016690 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:16:53.016741 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:16:53.024519 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 00:16:53.025477 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:16:53.025547 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:16:53.027591 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:16:53.027640 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:16:53.029687 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:16:53.029735 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 00:16:53.031916 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 00:16:53.031964 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:16:53.034121 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:16:53.044515 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:16:53.044632 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:16:53.051106 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:16:53.052153 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:16:53.054802 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:16:53.054978 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 00:16:53.056659 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:16:53.056725 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:16:53.058035 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:16:53.058091 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 12 00:16:53.059835 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:16:53.059893 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:16:53.062664 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:16:53.062713 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:16:53.065390 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:16:53.065440 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:16:53.068219 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:16:53.068263 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 00:16:53.081571 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:16:53.082648 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:16:53.082719 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:16:53.084926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:16:53.084975 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:16:53.089213 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:16:53.089308 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 00:16:53.090770 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:16:53.093357 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:16:53.103067 systemd[1]: Switching root. Jul 12 00:16:53.128525 systemd-journald[237]: Journal stopped Jul 12 00:16:53.879155 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jul 12 00:16:53.879213 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:16:53.879226 kernel: SELinux: policy capability open_perms=1 Jul 12 00:16:53.879236 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:16:53.879247 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:16:53.879257 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:16:53.879267 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:16:53.879280 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:16:53.879290 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:16:53.879300 kernel: audit: type=1403 audit(1752279413.279:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:16:53.879311 systemd[1]: Successfully loaded SELinux policy in 33.303ms. Jul 12 00:16:53.879330 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.906ms. Jul 12 00:16:53.879342 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:16:53.879354 systemd[1]: Detected virtualization kvm. Jul 12 00:16:53.879365 systemd[1]: Detected architecture arm64. Jul 12 00:16:53.879375 systemd[1]: Detected first boot. Jul 12 00:16:53.879405 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:16:53.879431 zram_generator::config[1045]: No configuration found. Jul 12 00:16:53.879443 systemd[1]: Populated /etc with preset unit settings. 
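"Initializing machine ID from VM UUID" is the first-boot path where systemd seeds /etc/machine-id from the hypervisor-provided UUID instead of generating a random one. Roughly, the idea (the sysfs path and normalization below are an illustration of the concept, not systemd's exact logic):

    import pathlib

    # Hypothetical recreation: read the SMBIOS product UUID exposed by the
    # VM and normalize it to the 32-hex-digit /etc/machine-id format.
    uuid = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = uuid.lower().replace("-", "")
    print(machine_id)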
Jul 12 00:16:53.879454 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:16:53.879465 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 12 00:16:53.879479 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:16:53.879491 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 00:16:53.879502 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 12 00:16:53.879514 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 00:16:53.879525 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 00:16:53.879536 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 00:16:53.879547 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 00:16:53.879558 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 00:16:53.879568 systemd[1]: Created slice user.slice - User and Session Slice. Jul 12 00:16:53.879579 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:16:53.879590 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:16:53.879604 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 00:16:53.879616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 00:16:53.879627 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 00:16:53.879639 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:16:53.879650 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 12 00:16:53.879660 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:16:53.879671 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 00:16:53.879682 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 00:16:53.879694 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 00:16:53.879706 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:16:53.879718 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:16:53.879728 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:16:53.879740 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:16:53.879750 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:16:53.879762 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:16:53.879772 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:16:53.879783 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:16:53.879797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:16:53.879813 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:16:53.879825 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 00:16:53.879836 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jul 12 00:16:53.879848 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:16:53.879859 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 00:16:53.879870 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:16:53.879881 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:16:53.879891 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 12 00:16:53.879904 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:16:53.879914 systemd[1]: Reached target machines.target - Containers. Jul 12 00:16:53.879926 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:16:53.879937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:16:53.879947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:16:53.879960 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:16:53.879971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:16:53.879982 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:16:53.879993 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:16:53.880005 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:16:53.880016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:16:53.880027 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:16:53.880044 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:16:53.880057 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 00:16:53.880067 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:16:53.880078 kernel: fuse: init (API version 7.39) Jul 12 00:16:53.880090 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:16:53.880102 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:16:53.880112 kernel: ACPI: bus type drm_connector registered Jul 12 00:16:53.880122 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:16:53.880133 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:16:53.880143 kernel: loop: module loaded Jul 12 00:16:53.880154 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:16:53.880165 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:16:53.880176 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:16:53.880186 systemd[1]: Stopped verity-setup.service. Jul 12 00:16:53.880197 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 00:16:53.880209 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:16:53.880220 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 00:16:53.880231 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:16:53.880261 systemd-journald[1123]: Collecting audit messages is disabled. 
Jul 12 00:16:53.880286 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:16:53.880298 systemd-journald[1123]: Journal started Jul 12 00:16:53.880320 systemd-journald[1123]: Runtime Journal (/run/log/journal/7f50c510fbc1423e962900be3c50c8a2) is 5.9M, max 47.3M, 41.4M free. Jul 12 00:16:53.880359 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 00:16:53.654213 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:16:53.670539 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 12 00:16:53.670927 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:16:53.883932 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:16:53.885431 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:16:53.886874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:16:53.888405 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:16:53.888588 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:16:53.890134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:16:53.890283 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:16:53.891889 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:16:53.892053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:16:53.893430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:16:53.893586 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:16:53.895032 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:16:53.895214 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 00:16:53.896602 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:16:53.896747 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:16:53.898124 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:16:53.900438 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:16:53.901943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:16:53.915684 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:16:53.925487 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:16:53.927700 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 00:16:53.928840 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:16:53.928885 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:16:53.930915 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 12 00:16:53.933121 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:16:53.935260 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:16:53.936340 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:16:53.937654 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jul 12 00:16:53.942558 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 12 00:16:53.943763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:16:53.947567 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 00:16:53.948785 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:16:53.949634 systemd-journald[1123]: Time spent on flushing to /var/log/journal/7f50c510fbc1423e962900be3c50c8a2 is 17.330ms for 853 entries. Jul 12 00:16:53.949634 systemd-journald[1123]: System Journal (/var/log/journal/7f50c510fbc1423e962900be3c50c8a2) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:16:53.982527 systemd-journald[1123]: Received client request to flush runtime journal. Jul 12 00:16:53.949840 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:16:53.955470 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 12 00:16:53.960161 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 00:16:53.963213 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:16:53.964941 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 00:16:53.966180 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 00:16:53.968794 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:16:53.972455 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 00:16:53.976246 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 00:16:53.986408 kernel: loop0: detected capacity change from 0 to 114328 Jul 12 00:16:53.986562 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 12 00:16:53.993625 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 12 00:16:53.995472 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 00:16:54.009061 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:16:54.009491 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:16:54.013395 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 12 00:16:54.015349 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:16:54.015984 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 12 00:16:54.026618 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:16:54.028257 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:16:54.035619 kernel: loop1: detected capacity change from 0 to 203944 Jul 12 00:16:54.044290 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jul 12 00:16:54.044604 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jul 12 00:16:54.049289 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
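The flush line above gives a handy rate figure: 17.330 ms to move 853 entries from the runtime journal in /run to the persistent journal under /var/log/journal is about 20 µs per entry. Quick check:

    entries, flush_ms = 853, 17.330
    print(f"{flush_ms / entries * 1000:.1f} us/entry")  # ~20.3 us/entry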
Jul 12 00:16:54.085407 kernel: loop2: detected capacity change from 0 to 114432 Jul 12 00:16:54.123413 kernel: loop3: detected capacity change from 0 to 114328 Jul 12 00:16:54.129455 kernel: loop4: detected capacity change from 0 to 203944 Jul 12 00:16:54.135581 kernel: loop5: detected capacity change from 0 to 114432 Jul 12 00:16:54.139662 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 12 00:16:54.140054 (sd-merge)[1183]: Merged extensions into '/usr'. Jul 12 00:16:54.144628 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:16:54.144647 systemd[1]: Reloading... Jul 12 00:16:54.208411 zram_generator::config[1210]: No configuration found. Jul 12 00:16:54.244699 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:16:54.310067 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:16:54.353959 systemd[1]: Reloading finished in 208 ms. Jul 12 00:16:54.385273 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:16:54.387791 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:16:54.396671 systemd[1]: Starting ensure-sysext.service... Jul 12 00:16:54.398642 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:16:54.408825 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:16:54.408841 systemd[1]: Reloading... Jul 12 00:16:54.423944 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:16:54.424221 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:16:54.425020 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:16:54.425249 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jul 12 00:16:54.425298 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jul 12 00:16:54.427653 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:16:54.427667 systemd-tmpfiles[1245]: Skipping /boot Jul 12 00:16:54.434487 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:16:54.434502 systemd-tmpfiles[1245]: Skipping /boot Jul 12 00:16:54.458412 zram_generator::config[1269]: No configuration found. Jul 12 00:16:54.552351 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:16:54.596547 systemd[1]: Reloading finished in 187 ms. Jul 12 00:16:54.612797 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:16:54.625941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:16:54.632289 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:16:54.635004 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
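The loop3-loop5 capacity lines mirror loop0-loop2 because the same three sysext images are attached again for the merge; systemd-sysext finds them through links like the /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw symlink Ignition wrote earlier. A sketch of that discovery step (a subset of the search path from systemd-sysext(8); precedence rules and image validation are simplified):

    import os

    SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions():
        found = {}
        for d in SEARCH:
            if not os.path.isdir(d):
                continue
            for name in sorted(os.listdir(d)):
                ext = name.removesuffix(".raw")  # Python 3.9+
                # Keep the first hit; real precedence rules are more involved.
                found.setdefault(ext, os.path.realpath(os.path.join(d, name)))
        return found

    for ext, image in sorted(discover_extensions().items()):
        print(f"Using extension '{ext}' from {image}")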
Jul 12 00:16:54.637460 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:16:54.643642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:16:54.647999 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:16:54.653967 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:16:54.660887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:16:54.662687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:16:54.666025 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:16:54.671309 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:16:54.672577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:16:54.674731 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:16:54.677545 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:16:54.679583 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:16:54.679724 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:16:54.693650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:16:54.693810 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:16:54.695715 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:16:54.695864 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:16:54.696716 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Jul 12 00:16:54.705707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:16:54.714787 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:16:54.717636 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:16:54.722932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:16:54.724115 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:16:54.727350 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:16:54.729130 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:16:54.731204 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:16:54.733119 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:16:54.734818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:16:54.734948 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:16:54.736720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:16:54.736868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:16:54.750404 augenrules[1355]: No rules Jul 12 00:16:54.753423 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:16:54.755576 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 12 00:16:54.755719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:16:54.759006 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:16:54.766477 systemd[1]: Finished ensure-sysext.service. Jul 12 00:16:54.772539 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:16:54.775023 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:16:54.782571 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:16:54.787586 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:16:54.790648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:16:54.791913 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:16:54.797574 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:16:54.801432 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 00:16:54.802979 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:16:54.803482 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:16:54.804043 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:16:54.805968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:16:54.806122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:16:54.808489 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:16:54.808633 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:16:54.814627 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 12 00:16:54.822498 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:16:54.822569 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:16:54.858015 systemd-resolved[1312]: Positive Trust Anchors: Jul 12 00:16:54.861556 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:16:54.861683 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:16:54.863535 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1338) Jul 12 00:16:54.877635 systemd-resolved[1312]: Defaulting to hostname 'linux'. Jul 12 00:16:54.883643 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
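The single positive trust anchor above is the DNSSEC root zone key ("KSK-2017", key tag 20326); the long negative list simply exempts private and reverse-lookup zones from DNSSEC validation. Splitting the DS record into its fields:

    # Owner, class, type, key tag, algorithm, digest type, digest.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _typ, tag, alg, dtype, digest = ds.split()
    print(f"key tag {tag}, algorithm {alg} (RSASHA256), digest type {dtype} (SHA-256)")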
Jul 12 00:16:54.892281 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:16:54.902126 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 00:16:54.904892 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 00:16:54.906216 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:16:54.913593 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:16:54.925359 systemd-networkd[1385]: lo: Link UP Jul 12 00:16:54.925367 systemd-networkd[1385]: lo: Gained carrier Jul 12 00:16:54.926766 systemd-networkd[1385]: Enumeration completed Jul 12 00:16:54.926911 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:16:54.927852 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:16:54.927867 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:16:54.929010 systemd-networkd[1385]: eth0: Link UP Jul 12 00:16:54.929021 systemd-networkd[1385]: eth0: Gained carrier Jul 12 00:16:54.929047 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:16:54.932689 systemd[1]: Reached target network.target - Network. Jul 12 00:16:54.943694 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:16:54.945696 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:16:54.953053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:16:54.953439 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:16:54.955301 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. Jul 12 00:16:54.956179 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 12 00:16:54.956233 systemd-timesyncd[1386]: Initial clock synchronization to Sat 2025-07-12 00:16:54.682594 UTC. Jul 12 00:16:54.959067 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 12 00:16:54.964023 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:16:54.993777 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:16:55.013242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:16:55.027963 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 12 00:16:55.029534 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:16:55.030722 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:16:55.031886 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:16:55.033093 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:16:55.034465 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:16:55.035817 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
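Note the skew between the journal stamp on the synchronization line (00:16:54.956233) and the time it synchronized to (00:16:54.682594): an initial clock step of roughly a quarter second, which is why wall-clock times immediately around the sync can look slightly out of order. The arithmetic:

    from datetime import datetime

    logged = datetime.fromisoformat("2025-07-12 00:16:54.956233")
    synced = datetime.fromisoformat("2025-07-12 00:16:54.682594")
    print(f"{(logged - synced).total_seconds() * 1000:.1f} ms")  # ~273.6 ms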
Jul 12 00:16:55.036999 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:16:55.038186 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:16:55.038227 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:16:55.039140 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:16:55.040837 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:16:55.043423 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:16:55.056566 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:16:55.059009 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 12 00:16:55.060732 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:16:55.062002 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:16:55.062973 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:16:55.063979 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:16:55.064008 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:16:55.065053 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:16:55.067095 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:16:55.067170 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:16:55.070631 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:16:55.076618 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:16:55.084576 jq[1415]: false Jul 12 00:16:55.080252 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:16:55.081580 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:16:55.085634 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:16:55.090636 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:16:55.094774 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:16:55.095066 extend-filesystems[1416]: Found loop3 Jul 12 00:16:55.096620 extend-filesystems[1416]: Found loop4 Jul 12 00:16:55.096620 extend-filesystems[1416]: Found loop5 Jul 12 00:16:55.096620 extend-filesystems[1416]: Found vda Jul 12 00:16:55.096620 extend-filesystems[1416]: Found vda1 Jul 12 00:16:55.096620 extend-filesystems[1416]: Found vda2 Jul 12 00:16:55.096620 extend-filesystems[1416]: Found vda3 Jul 12 00:16:55.096620 extend-filesystems[1416]: Found usr Jul 12 00:16:55.096620 extend-filesystems[1416]: Found vda4 Jul 12 00:16:55.096620 extend-filesystems[1416]: Found vda6 Jul 12 00:16:55.096620 extend-filesystems[1416]: Found vda7 Jul 12 00:16:55.096620 extend-filesystems[1416]: Found vda9 Jul 12 00:16:55.096620 extend-filesystems[1416]: Checking size of /dev/vda9 Jul 12 00:16:55.103268 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:16:55.108626 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 12 00:16:55.109093 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:16:55.112946 dbus-daemon[1414]: [system] SELinux support is enabled Jul 12 00:16:55.113567 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:16:55.116604 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:16:55.118288 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:16:55.121554 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 12 00:16:55.124792 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:16:55.124952 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:16:55.125241 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:16:55.126365 extend-filesystems[1416]: Resized partition /dev/vda9 Jul 12 00:16:55.126602 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:16:55.127326 jq[1433]: true Jul 12 00:16:55.129981 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:16:55.130159 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:16:55.135425 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1339) Jul 12 00:16:55.147634 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:16:55.148860 systemd-logind[1428]: New seat seat0. Jul 12 00:16:55.152036 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:16:55.161041 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:16:55.161219 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:16:55.162735 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:16:55.162852 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:16:55.164213 update_engine[1430]: I20250712 00:16:55.163964 1430 main.cc:92] Flatcar Update Engine starting Jul 12 00:16:55.165739 extend-filesystems[1438]: resize2fs 1.47.1 (20-May-2024) Jul 12 00:16:55.165732 (ntainerd)[1448]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:16:55.173147 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 12 00:16:55.179533 update_engine[1430]: I20250712 00:16:55.179253 1430 update_check_scheduler.cc:74] Next update check in 4m39s Jul 12 00:16:55.179502 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:16:55.184595 jq[1440]: true Jul 12 00:16:55.187686 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 12 00:16:55.197426 tar[1439]: linux-arm64/helm Jul 12 00:16:55.216405 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 12 00:16:55.226766 locksmithd[1451]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:16:55.227866 extend-filesystems[1438]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 12 00:16:55.227866 extend-filesystems[1438]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:16:55.227866 extend-filesystems[1438]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 12 00:16:55.231242 extend-filesystems[1416]: Resized filesystem in /dev/vda9 Jul 12 00:16:55.228844 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:16:55.230424 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:16:55.253084 bash[1475]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:16:55.255432 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:16:55.258160 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 12 00:16:55.386891 containerd[1448]: time="2025-07-12T00:16:55.386756399Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:16:55.417294 containerd[1448]: time="2025-07-12T00:16:55.417175261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:16:55.418790 containerd[1448]: time="2025-07-12T00:16:55.418730297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:16:55.418790 containerd[1448]: time="2025-07-12T00:16:55.418767770Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:16:55.418790 containerd[1448]: time="2025-07-12T00:16:55.418783610Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:16:55.418952 containerd[1448]: time="2025-07-12T00:16:55.418932461Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 12 00:16:55.418976 containerd[1448]: time="2025-07-12T00:16:55.418954134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 12 00:16:55.419038 containerd[1448]: time="2025-07-12T00:16:55.419022861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:16:55.419058 containerd[1448]: time="2025-07-12T00:16:55.419040168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:16:55.419222 containerd[1448]: time="2025-07-12T00:16:55.419190101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:16:55.419259 containerd[1448]: time="2025-07-12T00:16:55.419221162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
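In byte terms, the online resize above grew the root filesystem from roughly 2.1 GiB to 7.1 GiB: 553472 and 1864699 blocks of 4 KiB each. A quick check:

    BLOCK = 4096
    for blocks in (553472, 1864699):
        print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
    # 553472 blocks = 2.11 GiB; 1864699 blocks = 7.11 GiB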
type=io.containerd.snapshotter.v1 Jul 12 00:16:55.419259 containerd[1448]: time="2025-07-12T00:16:55.419238006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:16:55.419259 containerd[1448]: time="2025-07-12T00:16:55.419249170Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:16:55.419334 containerd[1448]: time="2025-07-12T00:16:55.419318632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:16:55.419566 containerd[1448]: time="2025-07-12T00:16:55.419525972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:16:55.419648 containerd[1448]: time="2025-07-12T00:16:55.419624408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:16:55.419648 containerd[1448]: time="2025-07-12T00:16:55.419643686Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:16:55.419729 containerd[1448]: time="2025-07-12T00:16:55.419714345Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:16:55.419808 containerd[1448]: time="2025-07-12T00:16:55.419757497Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:16:55.425267 containerd[1448]: time="2025-07-12T00:16:55.425231687Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:16:55.425338 containerd[1448]: time="2025-07-12T00:16:55.425284613Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:16:55.425338 containerd[1448]: time="2025-07-12T00:16:55.425300337Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 12 00:16:55.425338 containerd[1448]: time="2025-07-12T00:16:55.425315519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 12 00:16:55.425338 containerd[1448]: time="2025-07-12T00:16:55.425331513Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:16:55.425578 containerd[1448]: time="2025-07-12T00:16:55.425556393Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:16:55.425825 containerd[1448]: time="2025-07-12T00:16:55.425805766Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:16:55.425935 containerd[1448]: time="2025-07-12T00:16:55.425916139Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:16:55.425972 containerd[1448]: time="2025-07-12T00:16:55.425939859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 12 00:16:55.425972 containerd[1448]: time="2025-07-12T00:16:55.425953960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jul 12 00:16:55.426016 containerd[1448]: time="2025-07-12T00:16:55.425978608Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:16:55.426016 containerd[1448]: time="2025-07-12T00:16:55.425994177Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:16:55.426016 containerd[1448]: time="2025-07-12T00:16:55.426006578Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:16:55.426075 containerd[1448]: time="2025-07-12T00:16:55.426020486Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:16:55.426075 containerd[1448]: time="2025-07-12T00:16:55.426034664Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:16:55.426075 containerd[1448]: time="2025-07-12T00:16:55.426046176Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:16:55.426075 containerd[1448]: time="2025-07-12T00:16:55.426057920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:16:55.426075 containerd[1448]: time="2025-07-12T00:16:55.426069047Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:16:55.426160 containerd[1448]: time="2025-07-12T00:16:55.426089213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426160 containerd[1448]: time="2025-07-12T00:16:55.426104511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426160 containerd[1448]: time="2025-07-12T00:16:55.426116333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426160 containerd[1448]: time="2025-07-12T00:16:55.426128850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426160 containerd[1448]: time="2025-07-12T00:16:55.426141019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426160 containerd[1448]: time="2025-07-12T00:16:55.426153459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426260 containerd[1448]: time="2025-07-12T00:16:55.426165010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426260 containerd[1448]: time="2025-07-12T00:16:55.426178454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426260 containerd[1448]: time="2025-07-12T00:16:55.426190739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426260 containerd[1448]: time="2025-07-12T00:16:55.426204376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426260 containerd[1448]: time="2025-07-12T00:16:55.426215580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jul 12 00:16:55.426260 containerd[1448]: time="2025-07-12T00:16:55.426228097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426260 containerd[1448]: time="2025-07-12T00:16:55.426240034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426260 containerd[1448]: time="2025-07-12T00:16:55.426255371Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:16:55.426405 containerd[1448]: time="2025-07-12T00:16:55.426276156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426405 containerd[1448]: time="2025-07-12T00:16:55.426289368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426405 containerd[1448]: time="2025-07-12T00:16:55.426300340Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:16:55.426717 containerd[1448]: time="2025-07-12T00:16:55.426484926Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:16:55.426717 containerd[1448]: time="2025-07-12T00:16:55.426510230Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:16:55.426717 containerd[1448]: time="2025-07-12T00:16:55.426523172Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:16:55.426717 containerd[1448]: time="2025-07-12T00:16:55.426707681Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:16:55.426717 containerd[1448]: time="2025-07-12T00:16:55.426719386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:16:55.426846 containerd[1448]: time="2025-07-12T00:16:55.426732869Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 12 00:16:55.426846 containerd[1448]: time="2025-07-12T00:16:55.426742991Z" level=info msg="NRI interface is disabled by configuration." Jul 12 00:16:55.426846 containerd[1448]: time="2025-07-12T00:16:55.426753615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 12 00:16:55.427124 containerd[1448]: time="2025-07-12T00:16:55.427040307Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:16:55.427124 containerd[1448]: time="2025-07-12T00:16:55.427100458Z" level=info msg="Connect containerd service" Jul 12 00:16:55.427124 containerd[1448]: time="2025-07-12T00:16:55.427124603Z" level=info msg="using legacy CRI server" Jul 12 00:16:55.429954 containerd[1448]: time="2025-07-12T00:16:55.427131673Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:16:55.429954 containerd[1448]: time="2025-07-12T00:16:55.427217398Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:16:55.429954 containerd[1448]: time="2025-07-12T00:16:55.428117382Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:16:55.429954 
containerd[1448]: time="2025-07-12T00:16:55.428668899Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:16:55.429954 containerd[1448]: time="2025-07-12T00:16:55.428705021Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:16:55.429954 containerd[1448]: time="2025-07-12T00:16:55.428736390Z" level=info msg="Start subscribing containerd event" Jul 12 00:16:55.429954 containerd[1448]: time="2025-07-12T00:16:55.428764476Z" level=info msg="Start recovering state" Jul 12 00:16:55.429954 containerd[1448]: time="2025-07-12T00:16:55.428817480Z" level=info msg="Start event monitor" Jul 12 00:16:55.429954 containerd[1448]: time="2025-07-12T00:16:55.428829379Z" level=info msg="Start snapshots syncer" Jul 12 00:16:55.429954 containerd[1448]: time="2025-07-12T00:16:55.428837569Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:16:55.429954 containerd[1448]: time="2025-07-12T00:16:55.428844832Z" level=info msg="Start streaming server" Jul 12 00:16:55.429151 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:16:55.432294 containerd[1448]: time="2025-07-12T00:16:55.430891969Z" level=info msg="containerd successfully booted in 0.045530s" Jul 12 00:16:55.533534 tar[1439]: linux-arm64/LICENSE Jul 12 00:16:55.533734 tar[1439]: linux-arm64/README.md Jul 12 00:16:55.544818 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:16:55.628581 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:16:55.647939 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:16:55.654643 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:16:55.659933 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:16:55.660125 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:16:55.663161 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:16:55.674611 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:16:55.677704 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:16:55.679915 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 12 00:16:55.681433 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:16:56.721512 systemd-networkd[1385]: eth0: Gained IPv6LL Jul 12 00:16:56.724552 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:16:56.726263 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:16:56.739637 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 12 00:16:56.742161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:16:56.744274 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:16:56.761192 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 00:16:56.761442 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 12 00:16:56.763117 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:16:56.764620 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:16:57.301143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:16:57.302845 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 12 00:16:57.305605 systemd[1]: Startup finished in 632ms (kernel) + 6.543s (initrd) + 4.063s (userspace) = 11.239s. Jul 12 00:16:57.306070 (kubelet)[1527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:16:57.735997 kubelet[1527]: E0712 00:16:57.735890 1527 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:16:57.738523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:16:57.738683 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:16:59.129900 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:16:59.131182 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:40390.service - OpenSSH per-connection server daemon (10.0.0.1:40390). Jul 12 00:16:59.188984 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 40390 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:16:59.190879 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:59.202483 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:16:59.209617 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:16:59.212048 systemd-logind[1428]: New session 1 of user core. Jul 12 00:16:59.219861 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:16:59.222171 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:16:59.228654 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:16:59.300959 systemd[1544]: Queued start job for default target default.target. Jul 12 00:16:59.308369 systemd[1544]: Created slice app.slice - User Application Slice. Jul 12 00:16:59.308407 systemd[1544]: Reached target paths.target - Paths. Jul 12 00:16:59.308419 systemd[1544]: Reached target timers.target - Timers. Jul 12 00:16:59.309703 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:16:59.319263 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:16:59.319325 systemd[1544]: Reached target sockets.target - Sockets. Jul 12 00:16:59.319337 systemd[1544]: Reached target basic.target - Basic System. Jul 12 00:16:59.319396 systemd[1544]: Reached target default.target - Main User Target. Jul 12 00:16:59.319426 systemd[1544]: Startup finished in 85ms. Jul 12 00:16:59.319690 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:16:59.321329 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:16:59.378414 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:40402.service - OpenSSH per-connection server daemon (10.0.0.1:40402). Jul 12 00:16:59.414235 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 40402 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:16:59.415490 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:59.419498 systemd-logind[1428]: New session 2 of user core. Jul 12 00:16:59.429781 systemd[1]: Started session-2.scope - Session 2 of User core. 
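The kubelet exit above is the normal first-boot state of a kubeadm-style node: /var/lib/kubelet/config.yaml is only written when kubeadm init or kubeadm join runs, so until then every start attempt fails with the same "no such file or directory". kubeadm generates a full KubeletConfiguration there; a minimal hand-written stand-in of the same shape (illustrative only, not what kubeadm actually writes) would be:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                     # matches CgroupDriver:systemd later in this log
    staticPodPath: /etc/kubernetes/manifests

systemd keeps retrying the unit in the background, which is why the identical error reappears at 00:17:08 below.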
Jul 12 00:16:59.484889 sshd[1555]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:59.493458 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:40402.service: Deactivated successfully. Jul 12 00:16:59.495613 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:16:59.496697 systemd-logind[1428]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:16:59.497773 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:40412.service - OpenSSH per-connection server daemon (10.0.0.1:40412). Jul 12 00:16:59.498505 systemd-logind[1428]: Removed session 2. Jul 12 00:16:59.534866 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 40412 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:16:59.536058 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:59.539995 systemd-logind[1428]: New session 3 of user core. Jul 12 00:16:59.552539 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:16:59.600199 sshd[1562]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:59.608697 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:40412.service: Deactivated successfully. Jul 12 00:16:59.610083 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:16:59.612440 systemd-logind[1428]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:16:59.613535 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:40426.service - OpenSSH per-connection server daemon (10.0.0.1:40426). Jul 12 00:16:59.614250 systemd-logind[1428]: Removed session 3. Jul 12 00:16:59.649122 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 40426 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:16:59.650462 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:59.654230 systemd-logind[1428]: New session 4 of user core. Jul 12 00:16:59.667556 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:16:59.719406 sshd[1569]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:59.734934 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:40426.service: Deactivated successfully. Jul 12 00:16:59.737874 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:16:59.739128 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:16:59.740253 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:40440.service - OpenSSH per-connection server daemon (10.0.0.1:40440). Jul 12 00:16:59.741088 systemd-logind[1428]: Removed session 4. Jul 12 00:16:59.775545 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 40440 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:16:59.776776 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:59.780786 systemd-logind[1428]: New session 5 of user core. Jul 12 00:16:59.795548 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:16:59.856424 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:16:59.856694 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:16:59.876115 sudo[1579]: pam_unix(sudo:session): session closed for user root Jul 12 00:16:59.878172 sshd[1576]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:59.889944 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:40440.service: Deactivated successfully. 
Jul 12 00:16:59.892762 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:16:59.893962 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:16:59.895242 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:40448.service - OpenSSH per-connection server daemon (10.0.0.1:40448). Jul 12 00:16:59.896128 systemd-logind[1428]: Removed session 5. Jul 12 00:16:59.931757 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 40448 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:16:59.933182 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:59.937452 systemd-logind[1428]: New session 6 of user core. Jul 12 00:16:59.946559 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 00:16:59.998751 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:16:59.999020 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:17:00.001954 sudo[1588]: pam_unix(sudo:session): session closed for user root Jul 12 00:17:00.006126 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:17:00.006616 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:17:00.025767 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:17:00.027107 auditctl[1591]: No rules Jul 12 00:17:00.027900 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:17:00.028103 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:17:00.029745 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:17:00.053312 augenrules[1609]: No rules Jul 12 00:17:00.054503 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:17:00.055525 sudo[1587]: pam_unix(sudo:session): session closed for user root Jul 12 00:17:00.057050 sshd[1584]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:00.068656 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:40448.service: Deactivated successfully. Jul 12 00:17:00.070620 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:17:00.073570 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:17:00.074736 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:40460.service - OpenSSH per-connection server daemon (10.0.0.1:40460). Jul 12 00:17:00.075444 systemd-logind[1428]: Removed session 6. Jul 12 00:17:00.112087 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 40460 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:17:00.113374 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:00.117442 systemd-logind[1428]: New session 7 of user core. Jul 12 00:17:00.130623 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 00:17:00.183229 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:17:00.184321 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:17:00.518668 systemd[1]: Starting docker.service - Docker Application Container Engine... 
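The sudo sequence above deletes the two shipped audit rule files and restarts audit-rules, which is why both auditctl and augenrules then report "No rules": augenrules regenerates /etc/audit/audit.rules by concatenating everything under /etc/audit/rules.d/, and that directory is now empty. The equivalent manual check, as illustrative commands rather than anything taken from this log:

    augenrules --load    # rebuild audit.rules from /etc/audit/rules.d/*.rules and load it
    auditctl -l          # list the loaded ruleset; prints "No rules" on this host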
Jul 12 00:17:00.518758 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:17:00.864480 dockerd[1638]: time="2025-07-12T00:17:00.863770797Z" level=info msg="Starting up" Jul 12 00:17:01.063927 dockerd[1638]: time="2025-07-12T00:17:01.063866536Z" level=info msg="Loading containers: start." Jul 12 00:17:01.154348 kernel: Initializing XFRM netlink socket Jul 12 00:17:01.214793 systemd-networkd[1385]: docker0: Link UP Jul 12 00:17:01.233596 dockerd[1638]: time="2025-07-12T00:17:01.233558669Z" level=info msg="Loading containers: done." Jul 12 00:17:01.245640 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2996872410-merged.mount: Deactivated successfully. Jul 12 00:17:01.247804 dockerd[1638]: time="2025-07-12T00:17:01.247753131Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:17:01.247885 dockerd[1638]: time="2025-07-12T00:17:01.247865185Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:17:01.248003 dockerd[1638]: time="2025-07-12T00:17:01.247974679Z" level=info msg="Daemon has completed initialization" Jul 12 00:17:01.276079 dockerd[1638]: time="2025-07-12T00:17:01.275907272Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:17:01.276383 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:17:01.886705 containerd[1448]: time="2025-07-12T00:17:01.886662787Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 12 00:17:02.522467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427713312.mount: Deactivated successfully. 
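dockerd comes up cleanly here: it initializes the XFRM netlink socket, creates the docker0 bridge (systemd-networkd reports the link up), settles on the overlay2 storage driver, and serves its API on /run/docker.sock. The "Not using native diff for overlay2" warning is informational; with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, docker disables its native overlayfs diff path for image builds and falls back to a slower one. Illustrative verification commands for this state (not taken from the log):

    docker info --format '{{.Driver}}'              # overlay2
    docker version --format '{{.Server.Version}}'   # 26.1.0 per the daemon line above
    ip link show docker0                            # the bridge the daemon just created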
Jul 12 00:17:03.514163 containerd[1448]: time="2025-07-12T00:17:03.514101532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:03.514598 containerd[1448]: time="2025-07-12T00:17:03.514575024Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 12 00:17:03.515555 containerd[1448]: time="2025-07-12T00:17:03.515493229Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:03.518500 containerd[1448]: time="2025-07-12T00:17:03.518428183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:03.519756 containerd[1448]: time="2025-07-12T00:17:03.519588550Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.632874908s" Jul 12 00:17:03.519756 containerd[1448]: time="2025-07-12T00:17:03.519631796Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 12 00:17:03.523990 containerd[1448]: time="2025-07-12T00:17:03.523925123Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 00:17:04.599222 containerd[1448]: time="2025-07-12T00:17:04.599173612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:04.600520 containerd[1448]: time="2025-07-12T00:17:04.600440376Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 12 00:17:04.601264 containerd[1448]: time="2025-07-12T00:17:04.601234806Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:04.604654 containerd[1448]: time="2025-07-12T00:17:04.604597919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:04.606540 containerd[1448]: time="2025-07-12T00:17:04.606102770Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.082078769s" Jul 12 00:17:04.606540 containerd[1448]: time="2025-07-12T00:17:04.606139192Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 12 
00:17:04.607064 containerd[1448]: time="2025-07-12T00:17:04.607030179Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 00:17:05.683670 containerd[1448]: time="2025-07-12T00:17:05.683605562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:05.684317 containerd[1448]: time="2025-07-12T00:17:05.684273539Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 12 00:17:05.684911 containerd[1448]: time="2025-07-12T00:17:05.684874723Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:05.688345 containerd[1448]: time="2025-07-12T00:17:05.688301385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:05.690059 containerd[1448]: time="2025-07-12T00:17:05.690001635Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.082937359s" Jul 12 00:17:05.690059 containerd[1448]: time="2025-07-12T00:17:05.690056180Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 12 00:17:05.690519 containerd[1448]: time="2025-07-12T00:17:05.690486197Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:17:06.634415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount280882815.mount: Deactivated successfully. 
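Each PullImage above resolves a tag, streams the layers (the containerd tmpmounts that systemd later cleans up), emits ImageCreate events for the tag, the image ID and the repo digest, and reports the bytes read and the stored image size. The same pulls can be reproduced by hand against this containerd; a sketch, assuming crictl is pointed at /run/containerd/containerd.sock (for example via /etc/crictl.yaml):

    crictl pull registry.k8s.io/kube-scheduler:v1.31.10
    crictl images --digests | grep kube-scheduler
    # or directly in containerd's CRI image namespace:
    ctr -n k8s.io images ls | grep registry.k8s.io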
Jul 12 00:17:06.883196 containerd[1448]: time="2025-07-12T00:17:06.883126878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:06.883635 containerd[1448]: time="2025-07-12T00:17:06.883588060Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 12 00:17:06.884642 containerd[1448]: time="2025-07-12T00:17:06.884422164Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:06.887312 containerd[1448]: time="2025-07-12T00:17:06.887268308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:06.887934 containerd[1448]: time="2025-07-12T00:17:06.887787351Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.197202202s" Jul 12 00:17:06.887934 containerd[1448]: time="2025-07-12T00:17:06.887817750Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 12 00:17:06.888323 containerd[1448]: time="2025-07-12T00:17:06.888291155Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:17:07.491576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024030181.mount: Deactivated successfully. Jul 12 00:17:07.989355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:17:07.996609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:08.105163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:08.108118 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:17:08.146634 kubelet[1915]: E0712 00:17:08.146531 1915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:17:08.149409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:17:08.149574 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
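The kubelet failing again with the missing config.yaml is the same condition as at 00:16:57, now driven by systemd's restart logic ("Scheduled restart job, restart counter is at 1"). The unit is configured to keep retrying until kubeadm writes the file; the relevant stanza looks roughly like the following (a sketch of typical kubeadm-style unit settings, not the exact unit file on this host):

    [Service]
    Restart=always
    RestartSec=10

The roughly ten-second gap between the failure at 00:16:57.738 and this retry is consistent with RestartSec=10.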
Jul 12 00:17:08.243164 containerd[1448]: time="2025-07-12T00:17:08.243021293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:08.244932 containerd[1448]: time="2025-07-12T00:17:08.244672206Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 12 00:17:08.245706 containerd[1448]: time="2025-07-12T00:17:08.245660574Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:08.249398 containerd[1448]: time="2025-07-12T00:17:08.249343410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:08.251347 containerd[1448]: time="2025-07-12T00:17:08.251304682Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.362976042s" Jul 12 00:17:08.251347 containerd[1448]: time="2025-07-12T00:17:08.251345435Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:17:08.252010 containerd[1448]: time="2025-07-12T00:17:08.251855304Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:17:08.665174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060461030.mount: Deactivated successfully. 
Jul 12 00:17:08.671281 containerd[1448]: time="2025-07-12T00:17:08.671228418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:08.671926 containerd[1448]: time="2025-07-12T00:17:08.671893983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 12 00:17:08.672458 containerd[1448]: time="2025-07-12T00:17:08.672431326Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:08.677041 containerd[1448]: time="2025-07-12T00:17:08.677005797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:08.677832 containerd[1448]: time="2025-07-12T00:17:08.677726389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 425.831962ms" Jul 12 00:17:08.677832 containerd[1448]: time="2025-07-12T00:17:08.677759548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:17:08.678450 containerd[1448]: time="2025-07-12T00:17:08.678347624Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 00:17:09.163628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631052905.mount: Deactivated successfully. Jul 12 00:17:10.912829 containerd[1448]: time="2025-07-12T00:17:10.912777956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:10.913345 containerd[1448]: time="2025-07-12T00:17:10.913310050Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 12 00:17:10.914396 containerd[1448]: time="2025-07-12T00:17:10.914334502Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:10.918295 containerd[1448]: time="2025-07-12T00:17:10.918254854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:10.920312 containerd[1448]: time="2025-07-12T00:17:10.920269995Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.24187616s" Jul 12 00:17:10.920312 containerd[1448]: time="2025-07-12T00:17:10.920308576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 12 00:17:15.735634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
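By the time kubelet.service is stopped here, every image a control-plane node needs has been pulled and cached: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.31.10, coredns v1.11.3, pause 3.10 and etcd 3.5.15-0. That is the set kubeadm pre-pulls, which suggests (though the log does not show the command) that an installer is staging images before running init; the explicit equivalent would be:

    kubeadm config images list --kubernetes-version v1.31.10
    kubeadm config images pull --kubernetes-version v1.31.10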
Jul 12 00:17:15.748596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:15.767117 systemd[1]: Reloading requested from client PID 2013 ('systemctl') (unit session-7.scope)... Jul 12 00:17:15.767132 systemd[1]: Reloading... Jul 12 00:17:15.836901 zram_generator::config[2061]: No configuration found. Jul 12 00:17:15.974847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:17:16.040672 systemd[1]: Reloading finished in 273 ms. Jul 12 00:17:16.085532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:16.087540 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:16.089070 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:17:16.090421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:16.091748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:16.195237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:16.200208 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:17:16.238291 kubelet[2099]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:17:16.238291 kubelet[2099]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:17:16.238291 kubelet[2099]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
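The three deprecation warnings mean the kubelet is still launched with flags that upstream wants moved into the config file. The migration is mechanical for two of them; the field names below are from the KubeletConfiguration v1beta1 API, and the values are the ones visible elsewhere in this log (the socket containerd serves and the flexvolume directory the kubelet probes):

    # --container-runtime-endpoint and --volume-plugin-dir move into
    # /var/lib/kubelet/config.yaml:
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

--pod-infra-container-image has no config-file equivalent; as the warning says, newer kubelets take the sandbox image from the CRI runtime instead.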
Jul 12 00:17:16.238628 kubelet[2099]: I0712 00:17:16.238402 2099 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:17:16.960403 kubelet[2099]: I0712 00:17:16.959359 2099 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:17:16.960403 kubelet[2099]: I0712 00:17:16.959401 2099 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:17:16.960403 kubelet[2099]: I0712 00:17:16.959634 2099 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:17:17.004016 kubelet[2099]: E0712 00:17:17.003979 2099 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:17.006153 kubelet[2099]: I0712 00:17:17.006125 2099 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:17:17.015312 kubelet[2099]: E0712 00:17:17.015265 2099 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:17:17.015312 kubelet[2099]: I0712 00:17:17.015301 2099 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:17:17.018722 kubelet[2099]: I0712 00:17:17.018689 2099 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:17:17.019482 kubelet[2099]: I0712 00:17:17.019453 2099 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:17:17.019647 kubelet[2099]: I0712 00:17:17.019611 2099 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:17:17.019829 kubelet[2099]: I0712 00:17:17.019643 2099 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:17:17.019907 kubelet[2099]: I0712 00:17:17.019887 2099 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:17:17.019907 kubelet[2099]: I0712 00:17:17.019897 2099 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:17:17.020149 kubelet[2099]: I0712 00:17:17.020126 2099 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:17:17.021973 kubelet[2099]: I0712 00:17:17.021945 2099 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:17:17.022003 kubelet[2099]: I0712 00:17:17.021976 2099 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:17:17.022025 kubelet[2099]: I0712 00:17:17.022003 2099 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:17:17.022096 kubelet[2099]: I0712 00:17:17.022079 2099 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:17:17.023705 kubelet[2099]: W0712 00:17:17.023639 2099 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 12 00:17:17.023743 kubelet[2099]: E0712 00:17:17.023716 2099 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:17.023743 kubelet[2099]: W0712 00:17:17.023650 2099 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 12 00:17:17.023791 kubelet[2099]: E0712 00:17:17.023754 2099 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:17.025949 kubelet[2099]: I0712 00:17:17.025924 2099 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:17:17.026830 kubelet[2099]: I0712 00:17:17.026806 2099 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:17:17.026965 kubelet[2099]: W0712 00:17:17.026946 2099 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:17:17.027941 kubelet[2099]: I0712 00:17:17.027921 2099 server.go:1274] "Started kubelet" Jul 12 00:17:17.028258 kubelet[2099]: I0712 00:17:17.028215 2099 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:17:17.028598 kubelet[2099]: I0712 00:17:17.028552 2099 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:17:17.029161 kubelet[2099]: I0712 00:17:17.028854 2099 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:17:17.030062 kubelet[2099]: I0712 00:17:17.030026 2099 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:17:17.030627 kubelet[2099]: I0712 00:17:17.030595 2099 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:17:17.030743 kubelet[2099]: I0712 00:17:17.030722 2099 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:17:17.031942 kubelet[2099]: I0712 00:17:17.031911 2099 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:17:17.032007 kubelet[2099]: I0712 00:17:17.032000 2099 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:17:17.032179 kubelet[2099]: I0712 00:17:17.032062 2099 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:17:17.032729 kubelet[2099]: W0712 00:17:17.032567 2099 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 12 00:17:17.032729 kubelet[2099]: E0712 00:17:17.032627 2099 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Jul 12 
00:17:17.032729 kubelet[2099]: E0712 00:17:17.032683 2099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms" Jul 12 00:17:17.032903 kubelet[2099]: E0712 00:17:17.032875 2099 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:17.034781 kubelet[2099]: E0712 00:17:17.034758 2099 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:17:17.034869 kubelet[2099]: I0712 00:17:17.034842 2099 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:17:17.034923 kubelet[2099]: I0712 00:17:17.034913 2099 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:17:17.035053 kubelet[2099]: I0712 00:17:17.035037 2099 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:17:17.035602 kubelet[2099]: E0712 00:17:17.033900 2099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185158dd2f44e437 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:17:17.027894327 +0000 UTC m=+0.824895580,LastTimestamp:2025-07-12 00:17:17.027894327 +0000 UTC m=+0.824895580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:17:17.044988 kubelet[2099]: I0712 00:17:17.044940 2099 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:17:17.046271 kubelet[2099]: I0712 00:17:17.046236 2099 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:17:17.047215 kubelet[2099]: I0712 00:17:17.046397 2099 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:17:17.047215 kubelet[2099]: I0712 00:17:17.046423 2099 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:17:17.047215 kubelet[2099]: E0712 00:17:17.046461 2099 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:17:17.047733 kubelet[2099]: W0712 00:17:17.047668 2099 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 12 00:17:17.047801 kubelet[2099]: E0712 00:17:17.047738 2099 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:17.050550 kubelet[2099]: I0712 00:17:17.050522 2099 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:17:17.050550 kubelet[2099]: I0712 00:17:17.050543 2099 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:17:17.050649 kubelet[2099]: I0712 00:17:17.050564 2099 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:17:17.131195 kubelet[2099]: I0712 00:17:17.131167 2099 policy_none.go:49] "None policy: Start" Jul 12 00:17:17.132104 kubelet[2099]: I0712 00:17:17.132079 2099 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:17:17.132104 kubelet[2099]: I0712 00:17:17.132106 2099 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:17:17.133215 kubelet[2099]: E0712 00:17:17.133191 2099 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:17.145353 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:17:17.147146 kubelet[2099]: E0712 00:17:17.147111 2099 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:17:17.160428 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:17:17.163247 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 12 00:17:17.174161 kubelet[2099]: I0712 00:17:17.174122 2099 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:17:17.174341 kubelet[2099]: I0712 00:17:17.174314 2099 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:17:17.174413 kubelet[2099]: I0712 00:17:17.174331 2099 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:17:17.174813 kubelet[2099]: I0712 00:17:17.174606 2099 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:17:17.175895 kubelet[2099]: E0712 00:17:17.175865 2099 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:17:17.233594 kubelet[2099]: E0712 00:17:17.233481 2099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms" Jul 12 00:17:17.275577 kubelet[2099]: I0712 00:17:17.275538 2099 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:17:17.276036 kubelet[2099]: E0712 00:17:17.276007 2099 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Jul 12 00:17:17.355261 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 12 00:17:17.378367 systemd[1]: Created slice kubepods-burstable-pod5ed561b0ae372cf99eb4ccb4d9e82f38.slice - libcontainer container kubepods-burstable-pod5ed561b0ae372cf99eb4ccb4d9e82f38.slice. Jul 12 00:17:17.391746 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
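The three pod-scoped slices created here carry the UIDs of the control-plane static pods named below (kube-controller-manager, kube-apiserver and kube-scheduler on node "localhost"). The kubelet registered /etc/kubernetes/manifests as its static pod path earlier, and kubeadm has evidently just written manifests into it; on a kubeadm control plane that directory would typically contain, illustratively:

    ls /etc/kubernetes/manifests/
    # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    # (kubeadm setups usually include etcd.yaml as well; only the three pods
    #  above are visible in this log)

Once the kube-apiserver static pod is up, the recurring "dial tcp 10.0.0.82:6443: connect: connection refused" errors from the reflectors and the lease controller should stop.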
Jul 12 00:17:17.477507 kubelet[2099]: I0712 00:17:17.477483 2099 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:17:17.477834 kubelet[2099]: E0712 00:17:17.477807 2099 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Jul 12 00:17:17.533642 kubelet[2099]: I0712 00:17:17.533553 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:17.533642 kubelet[2099]: I0712 00:17:17.533585 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:17.533642 kubelet[2099]: I0712 00:17:17.533614 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:17.533642 kubelet[2099]: I0712 00:17:17.533642 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:17:17.533764 kubelet[2099]: I0712 00:17:17.533660 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ed561b0ae372cf99eb4ccb4d9e82f38-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ed561b0ae372cf99eb4ccb4d9e82f38\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:17.533764 kubelet[2099]: I0712 00:17:17.533674 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ed561b0ae372cf99eb4ccb4d9e82f38-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ed561b0ae372cf99eb4ccb4d9e82f38\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:17.533764 kubelet[2099]: I0712 00:17:17.533693 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ed561b0ae372cf99eb4ccb4d9e82f38-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ed561b0ae372cf99eb4ccb4d9e82f38\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:17.533764 kubelet[2099]: I0712 00:17:17.533710 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:17.533764 kubelet[2099]: I0712 00:17:17.533726 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:17.634206 kubelet[2099]: E0712 00:17:17.634153 2099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms" Jul 12 00:17:17.675633 kubelet[2099]: E0712 00:17:17.675595 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:17.676251 containerd[1448]: time="2025-07-12T00:17:17.676213367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:17.690561 kubelet[2099]: E0712 00:17:17.690522 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:17.690967 containerd[1448]: time="2025-07-12T00:17:17.690928103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ed561b0ae372cf99eb4ccb4d9e82f38,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:17.694491 kubelet[2099]: E0712 00:17:17.694400 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:17.694813 containerd[1448]: time="2025-07-12T00:17:17.694787779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:17.875102 kubelet[2099]: W0712 00:17:17.874969 2099 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 12 00:17:17.875102 kubelet[2099]: E0712 00:17:17.875045 2099 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:17.879192 kubelet[2099]: I0712 00:17:17.879160 2099 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:17:17.879484 kubelet[2099]: E0712 00:17:17.879440 2099 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Jul 12 00:17:18.114866 kubelet[2099]: W0712 00:17:18.114798 2099 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 12 00:17:18.114972 kubelet[2099]: E0712 00:17:18.114873 2099 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:18.246706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2480868933.mount: Deactivated successfully. Jul 12 00:17:18.254087 containerd[1448]: time="2025-07-12T00:17:18.254010176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:18.255025 containerd[1448]: time="2025-07-12T00:17:18.254966977Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:18.255777 containerd[1448]: time="2025-07-12T00:17:18.255689629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 12 00:17:18.256837 containerd[1448]: time="2025-07-12T00:17:18.256524863Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:18.258456 containerd[1448]: time="2025-07-12T00:17:18.258420893Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:17:18.259332 containerd[1448]: time="2025-07-12T00:17:18.259277973Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:18.260971 containerd[1448]: time="2025-07-12T00:17:18.260947043Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:17:18.263322 containerd[1448]: time="2025-07-12T00:17:18.263280419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:18.264234 containerd[1448]: time="2025-07-12T00:17:18.264134742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.130016ms" Jul 12 00:17:18.266102 containerd[1448]: time="2025-07-12T00:17:18.265897104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 589.601644ms" Jul 12 00:17:18.270696 containerd[1448]: 
time="2025-07-12T00:17:18.270638976Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.79354ms" Jul 12 00:17:18.332557 kubelet[2099]: W0712 00:17:18.332498 2099 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 12 00:17:18.332557 kubelet[2099]: E0712 00:17:18.332552 2099 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:18.378868 containerd[1448]: time="2025-07-12T00:17:18.378772346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:18.378868 containerd[1448]: time="2025-07-12T00:17:18.378827459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:18.378868 containerd[1448]: time="2025-07-12T00:17:18.378839160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:18.379460 containerd[1448]: time="2025-07-12T00:17:18.378948866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:18.379915 containerd[1448]: time="2025-07-12T00:17:18.379634537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:18.379915 containerd[1448]: time="2025-07-12T00:17:18.379687334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:18.379915 containerd[1448]: time="2025-07-12T00:17:18.379702110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:18.379915 containerd[1448]: time="2025-07-12T00:17:18.379781744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:18.380045 containerd[1448]: time="2025-07-12T00:17:18.379621239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:18.380045 containerd[1448]: time="2025-07-12T00:17:18.379667605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:18.380045 containerd[1448]: time="2025-07-12T00:17:18.379678907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:18.380045 containerd[1448]: time="2025-07-12T00:17:18.379739091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:18.403569 systemd[1]: Started cri-containerd-2fb67f3d9d6d225ef29a12d961a7b1b0105fb8b85f2ac6cbf055f7c3d80d8896.scope - libcontainer container 2fb67f3d9d6d225ef29a12d961a7b1b0105fb8b85f2ac6cbf055f7c3d80d8896. Jul 12 00:17:18.404765 systemd[1]: Started cri-containerd-a63135e974988f5fa5449b5359a357de276fc060b67f995aac46a32aee1587e7.scope - libcontainer container a63135e974988f5fa5449b5359a357de276fc060b67f995aac46a32aee1587e7. Jul 12 00:17:18.405822 systemd[1]: Started cri-containerd-b08bfe29d0b90afc5a190c7575cefc725f94c240b4476011142aca7377305d36.scope - libcontainer container b08bfe29d0b90afc5a190c7575cefc725f94c240b4476011142aca7377305d36. Jul 12 00:17:18.434930 kubelet[2099]: E0712 00:17:18.434874 2099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="1.6s" Jul 12 00:17:18.435347 containerd[1448]: time="2025-07-12T00:17:18.435075641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"b08bfe29d0b90afc5a190c7575cefc725f94c240b4476011142aca7377305d36\"" Jul 12 00:17:18.437754 kubelet[2099]: E0712 00:17:18.437680 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:18.440699 containerd[1448]: time="2025-07-12T00:17:18.440661333Z" level=info msg="CreateContainer within sandbox \"b08bfe29d0b90afc5a190c7575cefc725f94c240b4476011142aca7377305d36\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:17:18.441846 containerd[1448]: time="2025-07-12T00:17:18.441803559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ed561b0ae372cf99eb4ccb4d9e82f38,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fb67f3d9d6d225ef29a12d961a7b1b0105fb8b85f2ac6cbf055f7c3d80d8896\"" Jul 12 00:17:18.444168 kubelet[2099]: E0712 00:17:18.444149 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:18.444650 kubelet[2099]: W0712 00:17:18.444604 2099 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 12 00:17:18.444698 kubelet[2099]: E0712 00:17:18.444661 2099 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:18.445169 containerd[1448]: time="2025-07-12T00:17:18.444899444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"a63135e974988f5fa5449b5359a357de276fc060b67f995aac46a32aee1587e7\"" Jul 12 00:17:18.445518 kubelet[2099]: E0712 00:17:18.445497 2099 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:18.446460 containerd[1448]: time="2025-07-12T00:17:18.446176537Z" level=info msg="CreateContainer within sandbox \"2fb67f3d9d6d225ef29a12d961a7b1b0105fb8b85f2ac6cbf055f7c3d80d8896\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:17:18.447409 containerd[1448]: time="2025-07-12T00:17:18.447360897Z" level=info msg="CreateContainer within sandbox \"a63135e974988f5fa5449b5359a357de276fc060b67f995aac46a32aee1587e7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:17:18.465939 containerd[1448]: time="2025-07-12T00:17:18.465898547Z" level=info msg="CreateContainer within sandbox \"b08bfe29d0b90afc5a190c7575cefc725f94c240b4476011142aca7377305d36\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8922bcff6b6a54206beb6b5e34f88dda94c7b9295bbd55a24934e3292b7a3745\"" Jul 12 00:17:18.466614 containerd[1448]: time="2025-07-12T00:17:18.466590648Z" level=info msg="StartContainer for \"8922bcff6b6a54206beb6b5e34f88dda94c7b9295bbd55a24934e3292b7a3745\"" Jul 12 00:17:18.467296 containerd[1448]: time="2025-07-12T00:17:18.467234386Z" level=info msg="CreateContainer within sandbox \"2fb67f3d9d6d225ef29a12d961a7b1b0105fb8b85f2ac6cbf055f7c3d80d8896\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b29e4b5d07a011e0535ca7c6ff2d203cc36aceceefed53e32387a60a8788c40e\"" Jul 12 00:17:18.467620 containerd[1448]: time="2025-07-12T00:17:18.467595652Z" level=info msg="StartContainer for \"b29e4b5d07a011e0535ca7c6ff2d203cc36aceceefed53e32387a60a8788c40e\"" Jul 12 00:17:18.471075 containerd[1448]: time="2025-07-12T00:17:18.470952124Z" level=info msg="CreateContainer within sandbox \"a63135e974988f5fa5449b5359a357de276fc060b67f995aac46a32aee1587e7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a309873f23334895f3a1f17420853da52a65e009f22f68fd37e6d0aadb4f3f1\"" Jul 12 00:17:18.471579 containerd[1448]: time="2025-07-12T00:17:18.471559280Z" level=info msg="StartContainer for \"4a309873f23334895f3a1f17420853da52a65e009f22f68fd37e6d0aadb4f3f1\"" Jul 12 00:17:18.491580 systemd[1]: Started cri-containerd-8922bcff6b6a54206beb6b5e34f88dda94c7b9295bbd55a24934e3292b7a3745.scope - libcontainer container 8922bcff6b6a54206beb6b5e34f88dda94c7b9295bbd55a24934e3292b7a3745. Jul 12 00:17:18.492657 systemd[1]: Started cri-containerd-b29e4b5d07a011e0535ca7c6ff2d203cc36aceceefed53e32387a60a8788c40e.scope - libcontainer container b29e4b5d07a011e0535ca7c6ff2d203cc36aceceefed53e32387a60a8788c40e. Jul 12 00:17:18.496123 systemd[1]: Started cri-containerd-4a309873f23334895f3a1f17420853da52a65e009f22f68fd37e6d0aadb4f3f1.scope - libcontainer container 4a309873f23334895f3a1f17420853da52a65e009f22f68fd37e6d0aadb4f3f1. 
Jul 12 00:17:18.527754 containerd[1448]: time="2025-07-12T00:17:18.527128420Z" level=info msg="StartContainer for \"b29e4b5d07a011e0535ca7c6ff2d203cc36aceceefed53e32387a60a8788c40e\" returns successfully" Jul 12 00:17:18.535879 containerd[1448]: time="2025-07-12T00:17:18.535815069Z" level=info msg="StartContainer for \"8922bcff6b6a54206beb6b5e34f88dda94c7b9295bbd55a24934e3292b7a3745\" returns successfully" Jul 12 00:17:18.561929 containerd[1448]: time="2025-07-12T00:17:18.561820623Z" level=info msg="StartContainer for \"4a309873f23334895f3a1f17420853da52a65e009f22f68fd37e6d0aadb4f3f1\" returns successfully" Jul 12 00:17:18.681565 kubelet[2099]: I0712 00:17:18.681350 2099 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:17:18.681908 kubelet[2099]: E0712 00:17:18.681852 2099 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Jul 12 00:17:19.054825 kubelet[2099]: E0712 00:17:19.054798 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:19.062602 kubelet[2099]: E0712 00:17:19.062191 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:19.063183 kubelet[2099]: E0712 00:17:19.063162 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:20.065364 kubelet[2099]: E0712 00:17:20.065320 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:20.283484 kubelet[2099]: I0712 00:17:20.283451 2099 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:17:20.419838 kubelet[2099]: E0712 00:17:20.419727 2099 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 12 00:17:20.602305 kubelet[2099]: I0712 00:17:20.602241 2099 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 12 00:17:20.602305 kubelet[2099]: E0712 00:17:20.602298 2099 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 12 00:17:20.611667 kubelet[2099]: E0712 00:17:20.611628 2099 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:20.712042 kubelet[2099]: E0712 00:17:20.711925 2099 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:20.812699 kubelet[2099]: E0712 00:17:20.812657 2099 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:20.913181 kubelet[2099]: E0712 00:17:20.913131 2099 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:21.025665 kubelet[2099]: I0712 00:17:21.025544 2099 apiserver.go:52] "Watching apiserver" Jul 12 00:17:21.033020 kubelet[2099]: I0712 00:17:21.032990 2099 desired_state_of_world_populator.go:155] "Finished 
populating initial desired state of world" Jul 12 00:17:21.797187 kubelet[2099]: E0712 00:17:21.797119 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:22.067898 kubelet[2099]: E0712 00:17:22.067795 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:22.657645 systemd[1]: Reloading requested from client PID 2378 ('systemctl') (unit session-7.scope)... Jul 12 00:17:22.657950 systemd[1]: Reloading... Jul 12 00:17:22.744467 zram_generator::config[2420]: No configuration found. Jul 12 00:17:22.839237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:17:22.923902 systemd[1]: Reloading finished in 265 ms. Jul 12 00:17:22.966813 kubelet[2099]: I0712 00:17:22.966626 2099 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:17:22.966813 kubelet[2099]: E0712 00:17:22.966619 2099 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.185158dd2f44e437 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:17:17.027894327 +0000 UTC m=+0.824895580,LastTimestamp:2025-07-12 00:17:17.027894327 +0000 UTC m=+0.824895580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:17:22.966742 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:22.972304 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:17:22.973474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:22.973537 systemd[1]: kubelet.service: Consumed 1.195s CPU time, 129.9M memory peak, 0B memory swap peak. Jul 12 00:17:22.988641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:23.102303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:23.107238 (kubelet)[2459]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:17:23.152398 kubelet[2459]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:17:23.152870 kubelet[2459]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:17:23.152870 kubelet[2459]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
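The "Failed to ensure lease exists, will retry" errors earlier are the kubelet's node heartbeat: a GET (then create or update) of the coordination.k8s.io Lease kube-node-lease/localhost, retried at the logged 800ms and 1.6s intervals until kube-apiserver answers on 6443. A sketch of the same read with client-go; the /etc/kubernetes/kubelet.conf kubeconfig path is an assumption:

```go
// Hedged sketch: read the node Lease that the kubelet's lease controller kept
// failing to ensure above. Assumes a kubeconfig at /etc/kubernetes/kubelet.conf.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The same GET the kubelet retried above; it fails with "connection
	// refused" until kube-apiserver is serving on 10.0.0.82:6443.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("holder:", *lease.Spec.HolderIdentity)
	}
	fmt.Println("renewed:", lease.Spec.RenewTime)
}
```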
Jul 12 00:17:23.153146 kubelet[2459]: I0712 00:17:23.152959 2459 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:17:23.158677 kubelet[2459]: I0712 00:17:23.158637 2459 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:17:23.158677 kubelet[2459]: I0712 00:17:23.158668 2459 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:17:23.162481 kubelet[2459]: I0712 00:17:23.158899 2459 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:17:23.162481 kubelet[2459]: I0712 00:17:23.160315 2459 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:17:23.162481 kubelet[2459]: I0712 00:17:23.162333 2459 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:17:23.165439 kubelet[2459]: E0712 00:17:23.165397 2459 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:17:23.165439 kubelet[2459]: I0712 00:17:23.165434 2459 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:17:23.169684 kubelet[2459]: I0712 00:17:23.169647 2459 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:17:23.169854 kubelet[2459]: I0712 00:17:23.169834 2459 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:17:23.169972 kubelet[2459]: I0712 00:17:23.169942 2459 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:17:23.170134 kubelet[2459]: I0712 00:17:23.169968 2459 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:17:23.170210 kubelet[2459]: I0712 00:17:23.170138 2459 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:17:23.170210 kubelet[2459]: I0712 00:17:23.170147 2459 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:17:23.170210 kubelet[2459]: I0712 00:17:23.170179 2459 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:17:23.170287 kubelet[2459]: I0712 00:17:23.170278 2459 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:17:23.170311 kubelet[2459]: I0712 00:17:23.170291 2459 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:17:23.170311 kubelet[2459]: I0712 00:17:23.170310 2459 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:17:23.170350 kubelet[2459]: I0712 00:17:23.170322 2459 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:17:23.171405 kubelet[2459]: I0712 00:17:23.171300 2459 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:17:23.172068 kubelet[2459]: I0712 00:17:23.172044 2459 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:17:23.172501 kubelet[2459]: I0712 00:17:23.172483 2459 server.go:1274] "Started kubelet" Jul 12 00:17:23.173647 kubelet[2459]: I0712 00:17:23.173622 2459 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:17:23.180162 kubelet[2459]: I0712 00:17:23.175560 2459 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:17:23.180162 kubelet[2459]: I0712 00:17:23.175887 2459 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:17:23.180162 kubelet[2459]: I0712 00:17:23.176351 2459 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:17:23.180162 kubelet[2459]: I0712 
00:17:23.176413 2459 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:17:23.180162 kubelet[2459]: I0712 00:17:23.176600 2459 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:17:23.180162 kubelet[2459]: I0712 00:17:23.176786 2459 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:17:23.180162 kubelet[2459]: I0712 00:17:23.176898 2459 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:17:23.180162 kubelet[2459]: I0712 00:17:23.177056 2459 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:17:23.180162 kubelet[2459]: E0712 00:17:23.177231 2459 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:23.185520 kubelet[2459]: I0712 00:17:23.185470 2459 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:17:23.186778 kubelet[2459]: I0712 00:17:23.186309 2459 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:17:23.186778 kubelet[2459]: I0712 00:17:23.186337 2459 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:17:23.186778 kubelet[2459]: I0712 00:17:23.186356 2459 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:17:23.186778 kubelet[2459]: E0712 00:17:23.186411 2459 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:17:23.197770 kubelet[2459]: I0712 00:17:23.197733 2459 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:17:23.198594 kubelet[2459]: I0712 00:17:23.197860 2459 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:17:23.199329 kubelet[2459]: E0712 00:17:23.199193 2459 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:17:23.199873 kubelet[2459]: I0712 00:17:23.199817 2459 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:17:23.229114 kubelet[2459]: I0712 00:17:23.229082 2459 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:17:23.229623 kubelet[2459]: I0712 00:17:23.229259 2459 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:17:23.229623 kubelet[2459]: I0712 00:17:23.229285 2459 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:17:23.229623 kubelet[2459]: I0712 00:17:23.229482 2459 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:17:23.229623 kubelet[2459]: I0712 00:17:23.229493 2459 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:17:23.229623 kubelet[2459]: I0712 00:17:23.229512 2459 policy_none.go:49] "None policy: Start" Jul 12 00:17:23.230119 kubelet[2459]: I0712 00:17:23.230098 2459 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:17:23.230163 kubelet[2459]: I0712 00:17:23.230128 2459 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:17:23.230268 kubelet[2459]: I0712 00:17:23.230254 2459 state_mem.go:75] "Updated machine memory state" Jul 12 00:17:23.234391 kubelet[2459]: I0712 00:17:23.234358 2459 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:17:23.234651 kubelet[2459]: I0712 00:17:23.234625 2459 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:17:23.234701 kubelet[2459]: I0712 00:17:23.234644 2459 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:17:23.234940 kubelet[2459]: I0712 00:17:23.234831 2459 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:17:23.293548 kubelet[2459]: E0712 00:17:23.293490 2459 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:23.338083 kubelet[2459]: I0712 00:17:23.338030 2459 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:17:23.346586 kubelet[2459]: I0712 00:17:23.346518 2459 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 12 00:17:23.346728 kubelet[2459]: I0712 00:17:23.346602 2459 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 12 00:17:23.378114 kubelet[2459]: I0712 00:17:23.378071 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ed561b0ae372cf99eb4ccb4d9e82f38-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ed561b0ae372cf99eb4ccb4d9e82f38\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:23.378114 kubelet[2459]: I0712 00:17:23.378110 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ed561b0ae372cf99eb4ccb4d9e82f38-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ed561b0ae372cf99eb4ccb4d9e82f38\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:23.378268 kubelet[2459]: I0712 00:17:23.378132 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:23.378268 kubelet[2459]: I0712 00:17:23.378159 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:23.378268 kubelet[2459]: I0712 00:17:23.378175 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ed561b0ae372cf99eb4ccb4d9e82f38-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ed561b0ae372cf99eb4ccb4d9e82f38\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:23.378268 kubelet[2459]: I0712 00:17:23.378191 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:23.378268 kubelet[2459]: I0712 00:17:23.378205 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:23.378415 kubelet[2459]: I0712 00:17:23.378229 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:23.378415 kubelet[2459]: I0712 00:17:23.378247 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:17:23.593274 kubelet[2459]: E0712 00:17:23.593224 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:23.593405 kubelet[2459]: E0712 00:17:23.593292 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:23.594357 kubelet[2459]: E0712 00:17:23.594335 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:24.171209 kubelet[2459]: I0712 00:17:24.171152 2459 apiserver.go:52] "Watching apiserver" Jul 12 00:17:24.177396 kubelet[2459]: I0712 00:17:24.177357 2459 desired_state_of_world_populator.go:155] "Finished populating 
initial desired state of world" Jul 12 00:17:24.212452 kubelet[2459]: E0712 00:17:24.212412 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:24.214139 kubelet[2459]: E0712 00:17:24.214066 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:24.222955 kubelet[2459]: E0712 00:17:24.222902 2459 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:24.223144 kubelet[2459]: E0712 00:17:24.223088 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:24.245525 kubelet[2459]: I0712 00:17:24.245455 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.245436393 podStartE2EDuration="3.245436393s" podCreationTimestamp="2025-07-12 00:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:24.235133088 +0000 UTC m=+1.124813904" watchObservedRunningTime="2025-07-12 00:17:24.245436393 +0000 UTC m=+1.135117169" Jul 12 00:17:24.246214 kubelet[2459]: I0712 00:17:24.245602 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.245597438 podStartE2EDuration="1.245597438s" podCreationTimestamp="2025-07-12 00:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:24.245393384 +0000 UTC m=+1.135074239" watchObservedRunningTime="2025-07-12 00:17:24.245597438 +0000 UTC m=+1.135278214" Jul 12 00:17:24.300081 kubelet[2459]: I0712 00:17:24.299986 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.299968213 podStartE2EDuration="1.299968213s" podCreationTimestamp="2025-07-12 00:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:24.277824377 +0000 UTC m=+1.167505193" watchObservedRunningTime="2025-07-12 00:17:24.299968213 +0000 UTC m=+1.189648989" Jul 12 00:17:25.213974 kubelet[2459]: E0712 00:17:25.213941 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:27.237833 kubelet[2459]: E0712 00:17:27.237804 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:27.244036 kubelet[2459]: I0712 00:17:27.243953 2459 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:17:27.244319 containerd[1448]: time="2025-07-12T00:17:27.244284654Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
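"Updating runtime config through cri with podcidr" corresponds to a single CRI UpdateRuntimeConfig call pushing the node's pod CIDR down to containerd, after which the runtime waits for a CNI config to appear (hence the "No cni config template is specified" message). A sketch of that call, again assuming the conventional containerd socket:

```go
// Push a pod CIDR to the runtime the same way kuberuntime_manager logs above.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The CIDR value is the one logged above (192.168.0.0/24).
	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(ctx,
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
}
```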
Jul 12 00:17:27.244585 kubelet[2459]: I0712 00:17:27.244488 2459 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:17:27.621201 kubelet[2459]: E0712 00:17:27.621166 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:28.153105 systemd[1]: Created slice kubepods-besteffort-podb8537902_c81b_4dc6_a1af_db712bf5c72b.slice - libcontainer container kubepods-besteffort-podb8537902_c81b_4dc6_a1af_db712bf5c72b.slice. Jul 12 00:17:28.205005 kubelet[2459]: I0712 00:17:28.204957 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8537902-c81b-4dc6-a1af-db712bf5c72b-xtables-lock\") pod \"kube-proxy-zznwg\" (UID: \"b8537902-c81b-4dc6-a1af-db712bf5c72b\") " pod="kube-system/kube-proxy-zznwg" Jul 12 00:17:28.205005 kubelet[2459]: I0712 00:17:28.205004 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8537902-c81b-4dc6-a1af-db712bf5c72b-kube-proxy\") pod \"kube-proxy-zznwg\" (UID: \"b8537902-c81b-4dc6-a1af-db712bf5c72b\") " pod="kube-system/kube-proxy-zznwg" Jul 12 00:17:28.205182 kubelet[2459]: I0712 00:17:28.205027 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8537902-c81b-4dc6-a1af-db712bf5c72b-lib-modules\") pod \"kube-proxy-zznwg\" (UID: \"b8537902-c81b-4dc6-a1af-db712bf5c72b\") " pod="kube-system/kube-proxy-zznwg" Jul 12 00:17:28.205182 kubelet[2459]: I0712 00:17:28.205053 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrthr\" (UniqueName: \"kubernetes.io/projected/b8537902-c81b-4dc6-a1af-db712bf5c72b-kube-api-access-xrthr\") pod \"kube-proxy-zznwg\" (UID: \"b8537902-c81b-4dc6-a1af-db712bf5c72b\") " pod="kube-system/kube-proxy-zznwg" Jul 12 00:17:28.387744 systemd[1]: Created slice kubepods-besteffort-pod04029fe4_6e0f_45bd_807f_7cd0ecc4d74a.slice - libcontainer container kubepods-besteffort-pod04029fe4_6e0f_45bd_807f_7cd0ecc4d74a.slice. 
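The "Created slice kubepods-besteffort-pod….slice" units follow kubelet's systemd cgroup naming: QoS class plus the pod UID with dashes mapped to underscores, since "-" is systemd's hierarchy separator. A toy reconstruction for illustration only (not kubelet's actual code path):

```go
// Toy reconstruction of the slice names above; kubelet's systemd cgroup
// driver escapes the pod UID's dashes to underscores inside
// kubepods-<qos>-pod<uid>.slice.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "b8537902-c81b-4dc6-a1af-db712bf5c72b"))
	// Output: kubepods-besteffort-podb8537902_c81b_4dc6_a1af_db712bf5c72b.slice
}
```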
Jul 12 00:17:28.406779 kubelet[2459]: I0712 00:17:28.406637 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/04029fe4-6e0f-45bd-807f-7cd0ecc4d74a-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-pfz4t\" (UID: \"04029fe4-6e0f-45bd-807f-7cd0ecc4d74a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-pfz4t" Jul 12 00:17:28.406779 kubelet[2459]: I0712 00:17:28.406697 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7n5\" (UniqueName: \"kubernetes.io/projected/04029fe4-6e0f-45bd-807f-7cd0ecc4d74a-kube-api-access-vw7n5\") pod \"tigera-operator-5bf8dfcb4-pfz4t\" (UID: \"04029fe4-6e0f-45bd-807f-7cd0ecc4d74a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-pfz4t" Jul 12 00:17:28.470934 kubelet[2459]: E0712 00:17:28.470867 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:28.471865 containerd[1448]: time="2025-07-12T00:17:28.471783288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zznwg,Uid:b8537902-c81b-4dc6-a1af-db712bf5c72b,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:28.496851 containerd[1448]: time="2025-07-12T00:17:28.496763629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:28.496993 containerd[1448]: time="2025-07-12T00:17:28.496862679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:28.496993 containerd[1448]: time="2025-07-12T00:17:28.496889722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:28.496993 containerd[1448]: time="2025-07-12T00:17:28.496987812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:28.526649 systemd[1]: Started cri-containerd-211613c9e94ad99f22b4b277665cc92ab00b2435e18130a84c4b5537f804bbed.scope - libcontainer container 211613c9e94ad99f22b4b277665cc92ab00b2435e18130a84c4b5537f804bbed. 
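Each reconciler_common line above attaches one volume. Spelled out as core/v1 structures, kube-proxy's non-token volumes look roughly like this; the host paths are the conventional kube-proxy DaemonSet values and are an assumption, since the log only shows volume names and UIDs:

```go
// The kube-proxy-zznwg volumes from the reconciler lines, as core/v1 values.
// Host paths are assumptions (conventional kube-proxy DaemonSet settings).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		{Name: "kube-proxy", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
			},
		}},
		{Name: "xtables-lock", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock"},
		}},
		{Name: "lib-modules", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
		}},
	}
	for _, v := range vols {
		fmt.Println(v.Name)
	}
}
```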
Jul 12 00:17:28.550245 containerd[1448]: time="2025-07-12T00:17:28.550186858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zznwg,Uid:b8537902-c81b-4dc6-a1af-db712bf5c72b,Namespace:kube-system,Attempt:0,} returns sandbox id \"211613c9e94ad99f22b4b277665cc92ab00b2435e18130a84c4b5537f804bbed\"" Jul 12 00:17:28.551236 kubelet[2459]: E0712 00:17:28.551206 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:28.553855 containerd[1448]: time="2025-07-12T00:17:28.553817741Z" level=info msg="CreateContainer within sandbox \"211613c9e94ad99f22b4b277665cc92ab00b2435e18130a84c4b5537f804bbed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:17:28.571781 containerd[1448]: time="2025-07-12T00:17:28.571652007Z" level=info msg="CreateContainer within sandbox \"211613c9e94ad99f22b4b277665cc92ab00b2435e18130a84c4b5537f804bbed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"05ab41d65fe4951294145eafd807d45e35ceb500d6ccea4c61e1a8beb3c21bb2\"" Jul 12 00:17:28.572556 containerd[1448]: time="2025-07-12T00:17:28.572522654Z" level=info msg="StartContainer for \"05ab41d65fe4951294145eafd807d45e35ceb500d6ccea4c61e1a8beb3c21bb2\"" Jul 12 00:17:28.597593 systemd[1]: Started cri-containerd-05ab41d65fe4951294145eafd807d45e35ceb500d6ccea4c61e1a8beb3c21bb2.scope - libcontainer container 05ab41d65fe4951294145eafd807d45e35ceb500d6ccea4c61e1a8beb3c21bb2. Jul 12 00:17:28.627586 containerd[1448]: time="2025-07-12T00:17:28.625519600Z" level=info msg="StartContainer for \"05ab41d65fe4951294145eafd807d45e35ceb500d6ccea4c61e1a8beb3c21bb2\" returns successfully" Jul 12 00:17:28.698168 containerd[1448]: time="2025-07-12T00:17:28.695214298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-pfz4t,Uid:04029fe4-6e0f-45bd-807f-7cd0ecc4d74a,Namespace:tigera-operator,Attempt:0,}" Jul 12 00:17:28.716104 containerd[1448]: time="2025-07-12T00:17:28.715558095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:28.716104 containerd[1448]: time="2025-07-12T00:17:28.716084147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:28.716104 containerd[1448]: time="2025-07-12T00:17:28.716101309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:28.716279 containerd[1448]: time="2025-07-12T00:17:28.716211800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:28.738731 systemd[1]: Started cri-containerd-0b0ac601fa0d41dd854462ab985f540bc9f24ce1b10df1900f80a569221519b6.scope - libcontainer container 0b0ac601fa0d41dd854462ab985f540bc9f24ce1b10df1900f80a569221519b6. 
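The CreateContainer/StartContainer pair logged above is two more CRI calls against the same sandbox. A shape-only sketch: the sandbox ID is copied from the log, but the image reference is an assumption (matching the v1.31.8 kubelet), and a real kubelet request also carries the sandbox config, mounts, command, and environment:

```go
// Shape of the CreateContainer/StartContainer pair logged above, trimmed far
// below what kubelet actually sends. Image reference is an assumption.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Sandbox ID returned by RunPodSandbox for kube-proxy-zznwg above.
	sandboxID := "211613c9e94ad99f22b4b277665cc92ab00b2435e18130a84c4b5537f804bbed"
	create, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.8"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: create.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started", create.ContainerId)
}
```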
Jul 12 00:17:28.779738 containerd[1448]: time="2025-07-12T00:17:28.779673474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-pfz4t,Uid:04029fe4-6e0f-45bd-807f-7cd0ecc4d74a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0b0ac601fa0d41dd854462ab985f540bc9f24ce1b10df1900f80a569221519b6\"" Jul 12 00:17:28.782309 containerd[1448]: time="2025-07-12T00:17:28.781352002Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 00:17:29.221599 kubelet[2459]: E0712 00:17:29.221545 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:29.803879 kubelet[2459]: E0712 00:17:29.803259 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:29.827052 kubelet[2459]: I0712 00:17:29.826964 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zznwg" podStartSLOduration=1.826945216 podStartE2EDuration="1.826945216s" podCreationTimestamp="2025-07-12 00:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:29.230278556 +0000 UTC m=+6.119959372" watchObservedRunningTime="2025-07-12 00:17:29.826945216 +0000 UTC m=+6.716626032" Jul 12 00:17:30.109463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223858591.mount: Deactivated successfully. Jul 12 00:17:30.222718 kubelet[2459]: E0712 00:17:30.222429 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.527750 containerd[1448]: time="2025-07-12T00:17:30.527706574Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:30.528786 containerd[1448]: time="2025-07-12T00:17:30.528757908Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 12 00:17:30.529800 containerd[1448]: time="2025-07-12T00:17:30.529770039Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:30.532180 containerd[1448]: time="2025-07-12T00:17:30.531755416Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:30.532857 containerd[1448]: time="2025-07-12T00:17:30.532729503Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.751343218s" Jul 12 00:17:30.532857 containerd[1448]: time="2025-07-12T00:17:30.532761306Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 12 00:17:30.535255 containerd[1448]: time="2025-07-12T00:17:30.535224206Z" 
level=info msg="CreateContainer within sandbox \"0b0ac601fa0d41dd854462ab985f540bc9f24ce1b10df1900f80a569221519b6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 00:17:30.545997 containerd[1448]: time="2025-07-12T00:17:30.545962885Z" level=info msg="CreateContainer within sandbox \"0b0ac601fa0d41dd854462ab985f540bc9f24ce1b10df1900f80a569221519b6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eb33cbb748b12e651d3c48303aaab7807e7f21942576ba63d00edbcf6bcb7f33\"" Jul 12 00:17:30.546596 containerd[1448]: time="2025-07-12T00:17:30.546535977Z" level=info msg="StartContainer for \"eb33cbb748b12e651d3c48303aaab7807e7f21942576ba63d00edbcf6bcb7f33\"" Jul 12 00:17:30.547556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount973744232.mount: Deactivated successfully. Jul 12 00:17:30.581618 systemd[1]: Started cri-containerd-eb33cbb748b12e651d3c48303aaab7807e7f21942576ba63d00edbcf6bcb7f33.scope - libcontainer container eb33cbb748b12e651d3c48303aaab7807e7f21942576ba63d00edbcf6bcb7f33. Jul 12 00:17:30.604422 containerd[1448]: time="2025-07-12T00:17:30.604373664Z" level=info msg="StartContainer for \"eb33cbb748b12e651d3c48303aaab7807e7f21942576ba63d00edbcf6bcb7f33\" returns successfully" Jul 12 00:17:36.174287 sudo[1620]: pam_unix(sudo:session): session closed for user root Jul 12 00:17:36.188532 sshd[1617]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:36.192513 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:40460.service: Deactivated successfully. Jul 12 00:17:36.194176 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:17:36.194374 systemd[1]: session-7.scope: Consumed 6.908s CPU time, 152.5M memory peak, 0B memory swap peak. Jul 12 00:17:36.195027 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:17:36.195990 systemd-logind[1428]: Removed session 7. Jul 12 00:17:37.265667 kubelet[2459]: E0712 00:17:37.257478 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:37.310453 kubelet[2459]: I0712 00:17:37.309884 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-pfz4t" podStartSLOduration=7.557169259 podStartE2EDuration="9.309864524s" podCreationTimestamp="2025-07-12 00:17:28 +0000 UTC" firstStartedPulling="2025-07-12 00:17:28.780752262 +0000 UTC m=+5.670433078" lastFinishedPulling="2025-07-12 00:17:30.533447527 +0000 UTC m=+7.423128343" observedRunningTime="2025-07-12 00:17:31.233042661 +0000 UTC m=+8.122723477" watchObservedRunningTime="2025-07-12 00:17:37.309864524 +0000 UTC m=+14.199545380" Jul 12 00:17:37.630685 kubelet[2459]: E0712 00:17:37.630558 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:40.070950 update_engine[1430]: I20250712 00:17:40.070878 1430 update_attempter.cc:509] Updating boot flags... 
Jul 12 00:17:40.144421 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2879) Jul 12 00:17:40.192901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2883) Jul 12 00:17:41.343525 systemd[1]: Created slice kubepods-besteffort-podb3cebdbb_931c_4347_9ad6_b23f79117f2f.slice - libcontainer container kubepods-besteffort-podb3cebdbb_931c_4347_9ad6_b23f79117f2f.slice. Jul 12 00:17:41.499585 kubelet[2459]: I0712 00:17:41.499527 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3cebdbb-931c-4347-9ad6-b23f79117f2f-tigera-ca-bundle\") pod \"calico-typha-6d5d56f5bd-phwl9\" (UID: \"b3cebdbb-931c-4347-9ad6-b23f79117f2f\") " pod="calico-system/calico-typha-6d5d56f5bd-phwl9" Jul 12 00:17:41.499585 kubelet[2459]: I0712 00:17:41.499577 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnhlj\" (UniqueName: \"kubernetes.io/projected/b3cebdbb-931c-4347-9ad6-b23f79117f2f-kube-api-access-dnhlj\") pod \"calico-typha-6d5d56f5bd-phwl9\" (UID: \"b3cebdbb-931c-4347-9ad6-b23f79117f2f\") " pod="calico-system/calico-typha-6d5d56f5bd-phwl9" Jul 12 00:17:41.499585 kubelet[2459]: I0712 00:17:41.499599 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b3cebdbb-931c-4347-9ad6-b23f79117f2f-typha-certs\") pod \"calico-typha-6d5d56f5bd-phwl9\" (UID: \"b3cebdbb-931c-4347-9ad6-b23f79117f2f\") " pod="calico-system/calico-typha-6d5d56f5bd-phwl9" Jul 12 00:17:41.524137 systemd[1]: Created slice kubepods-besteffort-pod6d2f9022_505a_453d_817f_954d04901b64.slice - libcontainer container kubepods-besteffort-pod6d2f9022_505a_453d_817f_954d04901b64.slice. Jul 12 00:17:41.648680 kubelet[2459]: E0712 00:17:41.648565 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:41.649339 containerd[1448]: time="2025-07-12T00:17:41.648976794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d5d56f5bd-phwl9,Uid:b3cebdbb-931c-4347-9ad6-b23f79117f2f,Namespace:calico-system,Attempt:0,}" Jul 12 00:17:41.672366 containerd[1448]: time="2025-07-12T00:17:41.672263029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:41.672366 containerd[1448]: time="2025-07-12T00:17:41.672341833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:41.672366 containerd[1448]: time="2025-07-12T00:17:41.672361874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:41.672817 containerd[1448]: time="2025-07-12T00:17:41.672745013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:41.700686 systemd[1]: Started cri-containerd-9a5113c0eed3e14f18c01ebf7f980939e7dc0d34b3bcb42a3386d63dbb714491.scope - libcontainer container 9a5113c0eed3e14f18c01ebf7f980939e7dc0d34b3bcb42a3386d63dbb714491. 
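calico-typha's volumes add a Secret (typha-certs) alongside the ConfigMap and projected token patterns seen earlier. To follow the calico-system rollout these sandboxes belong to, a plain client-go list works; the kubeconfig path is again an assumption:

```go
// List the calico-system pods whose sandboxes are being created above.
// Assumes a kubeconfig at /etc/kubernetes/kubelet.conf.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("calico-system").
		List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase) // e.g. calico-typha-6d5d56f5bd-phwl9
	}
}
```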
Jul 12 00:17:41.701871 kubelet[2459]: I0712 00:17:41.701453 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6d2f9022-505a-453d-817f-954d04901b64-cni-bin-dir\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.701871 kubelet[2459]: I0712 00:17:41.701497 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6d2f9022-505a-453d-817f-954d04901b64-cni-log-dir\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.701871 kubelet[2459]: I0712 00:17:41.701531 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh57n\" (UniqueName: \"kubernetes.io/projected/6d2f9022-505a-453d-817f-954d04901b64-kube-api-access-mh57n\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.701871 kubelet[2459]: I0712 00:17:41.701555 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6d2f9022-505a-453d-817f-954d04901b64-policysync\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.701871 kubelet[2459]: I0712 00:17:41.701575 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6d2f9022-505a-453d-817f-954d04901b64-cni-net-dir\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.702062 kubelet[2459]: I0712 00:17:41.701591 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6d2f9022-505a-453d-817f-954d04901b64-flexvol-driver-host\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.702062 kubelet[2459]: I0712 00:17:41.701607 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6d2f9022-505a-453d-817f-954d04901b64-node-certs\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.702062 kubelet[2459]: I0712 00:17:41.701631 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d2f9022-505a-453d-817f-954d04901b64-lib-modules\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.702062 kubelet[2459]: I0712 00:17:41.701673 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d2f9022-505a-453d-817f-954d04901b64-tigera-ca-bundle\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.702062 kubelet[2459]: I0712 00:17:41.701690 2459 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d2f9022-505a-453d-817f-954d04901b64-var-lib-calico\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.702169 kubelet[2459]: I0712 00:17:41.701948 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6d2f9022-505a-453d-817f-954d04901b64-var-run-calico\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.702169 kubelet[2459]: I0712 00:17:41.701986 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d2f9022-505a-453d-817f-954d04901b64-xtables-lock\") pod \"calico-node-l274m\" (UID: \"6d2f9022-505a-453d-817f-954d04901b64\") " pod="calico-system/calico-node-l274m" Jul 12 00:17:41.740311 kubelet[2459]: E0712 00:17:41.740225 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wfgr2" podUID="015b368c-3c89-4707-af85-1b98a6fb48da" Jul 12 00:17:41.767729 containerd[1448]: time="2025-07-12T00:17:41.767685281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d5d56f5bd-phwl9,Uid:b3cebdbb-931c-4347-9ad6-b23f79117f2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a5113c0eed3e14f18c01ebf7f980939e7dc0d34b3bcb42a3386d63dbb714491\"" Jul 12 00:17:41.769194 kubelet[2459]: E0712 00:17:41.768734 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:41.770507 containerd[1448]: time="2025-07-12T00:17:41.770474139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 00:17:41.827784 containerd[1448]: time="2025-07-12T00:17:41.827710737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l274m,Uid:6d2f9022-505a-453d-817f-954d04901b64,Namespace:calico-system,Attempt:0,}" Jul 12 00:17:41.861458 containerd[1448]: time="2025-07-12T00:17:41.861346765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:41.861458 containerd[1448]: time="2025-07-12T00:17:41.861420048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:41.861458 containerd[1448]: time="2025-07-12T00:17:41.861436569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:41.861656 containerd[1448]: time="2025-07-12T00:17:41.861513573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:41.878604 systemd[1]: Started cri-containerd-d0a942ca14fffda77d781d0d084ecea70490c08f94ee03a1eda8ec822ba6f7dd.scope - libcontainer container d0a942ca14fffda77d781d0d084ecea70490c08f94ee03a1eda8ec822ba6f7dd. 
Jul 12 00:17:41.904637 kubelet[2459]: E0712 00:17:41.904534 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.904637 kubelet[2459]: W0712 00:17:41.904559 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.904637 kubelet[2459]: E0712 00:17:41.904581 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:41.904637 kubelet[2459]: I0712 00:17:41.904610 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhb8d\" (UniqueName: \"kubernetes.io/projected/015b368c-3c89-4707-af85-1b98a6fb48da-kube-api-access-hhb8d\") pod \"csi-node-driver-wfgr2\" (UID: \"015b368c-3c89-4707-af85-1b98a6fb48da\") " pod="calico-system/csi-node-driver-wfgr2" Jul 12 00:17:41.905880 kubelet[2459]: E0712 00:17:41.905857 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.905880 kubelet[2459]: W0712 00:17:41.905876 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.905974 kubelet[2459]: E0712 00:17:41.905904 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:41.905974 kubelet[2459]: I0712 00:17:41.905923 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/015b368c-3c89-4707-af85-1b98a6fb48da-socket-dir\") pod \"csi-node-driver-wfgr2\" (UID: \"015b368c-3c89-4707-af85-1b98a6fb48da\") " pod="calico-system/csi-node-driver-wfgr2" Jul 12 00:17:41.906994 kubelet[2459]: E0712 00:17:41.906967 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.906994 kubelet[2459]: W0712 00:17:41.906986 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.907245 kubelet[2459]: E0712 00:17:41.907153 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:41.907245 kubelet[2459]: I0712 00:17:41.907180 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/015b368c-3c89-4707-af85-1b98a6fb48da-registration-dir\") pod \"csi-node-driver-wfgr2\" (UID: \"015b368c-3c89-4707-af85-1b98a6fb48da\") " pod="calico-system/csi-node-driver-wfgr2" Jul 12 00:17:41.907449 kubelet[2459]: E0712 00:17:41.907427 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.907535 kubelet[2459]: W0712 00:17:41.907511 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.907708 kubelet[2459]: E0712 00:17:41.907620 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:41.908355 kubelet[2459]: E0712 00:17:41.908339 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.908622 kubelet[2459]: W0712 00:17:41.908444 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.908622 kubelet[2459]: E0712 00:17:41.908513 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:41.909351 kubelet[2459]: E0712 00:17:41.909186 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.909351 kubelet[2459]: W0712 00:17:41.909209 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.909516 containerd[1448]: time="2025-07-12T00:17:41.909375666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l274m,Uid:6d2f9022-505a-453d-817f-954d04901b64,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0a942ca14fffda77d781d0d084ecea70490c08f94ee03a1eda8ec822ba6f7dd\"" Jul 12 00:17:41.909610 kubelet[2459]: E0712 00:17:41.909253 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:41.909646 kubelet[2459]: I0712 00:17:41.909629 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/015b368c-3c89-4707-af85-1b98a6fb48da-varrun\") pod \"csi-node-driver-wfgr2\" (UID: \"015b368c-3c89-4707-af85-1b98a6fb48da\") " pod="calico-system/csi-node-driver-wfgr2" Jul 12 00:17:41.909768 kubelet[2459]: E0712 00:17:41.909712 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.909768 kubelet[2459]: W0712 00:17:41.909727 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.909926 kubelet[2459]: E0712 00:17:41.909861 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:41.910066 kubelet[2459]: E0712 00:17:41.910046 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.910202 kubelet[2459]: W0712 00:17:41.910187 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.910297 kubelet[2459]: E0712 00:17:41.910283 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:41.911075 kubelet[2459]: E0712 00:17:41.911041 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.911075 kubelet[2459]: W0712 00:17:41.911061 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.911179 kubelet[2459]: E0712 00:17:41.911087 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:41.911853 kubelet[2459]: E0712 00:17:41.911274 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.911853 kubelet[2459]: W0712 00:17:41.911288 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.911853 kubelet[2459]: E0712 00:17:41.911303 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:41.911853 kubelet[2459]: I0712 00:17:41.911830 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/015b368c-3c89-4707-af85-1b98a6fb48da-kubelet-dir\") pod \"csi-node-driver-wfgr2\" (UID: \"015b368c-3c89-4707-af85-1b98a6fb48da\") " pod="calico-system/csi-node-driver-wfgr2" Jul 12 00:17:41.912474 kubelet[2459]: E0712 00:17:41.912445 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.912474 kubelet[2459]: W0712 00:17:41.912470 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.912592 kubelet[2459]: E0712 00:17:41.912489 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:41.913933 kubelet[2459]: E0712 00:17:41.913368 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.913933 kubelet[2459]: W0712 00:17:41.913862 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.914197 kubelet[2459]: E0712 00:17:41.914167 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:41.914360 kubelet[2459]: E0712 00:17:41.914304 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.914360 kubelet[2459]: W0712 00:17:41.914321 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.914623 kubelet[2459]: E0712 00:17:41.914335 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:41.914747 kubelet[2459]: E0712 00:17:41.914733 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.914811 kubelet[2459]: W0712 00:17:41.914799 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.914867 kubelet[2459]: E0712 00:17:41.914857 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:41.915090 kubelet[2459]: E0712 00:17:41.915076 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:41.915169 kubelet[2459]: W0712 00:17:41.915155 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:41.915225 kubelet[2459]: E0712 00:17:41.915214 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.014240 kubelet[2459]: E0712 00:17:42.014194 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.014240 kubelet[2459]: W0712 00:17:42.014219 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.014240 kubelet[2459]: E0712 00:17:42.014239 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.014738 kubelet[2459]: E0712 00:17:42.014512 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.014738 kubelet[2459]: W0712 00:17:42.014524 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.014738 kubelet[2459]: E0712 00:17:42.014550 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.015124 kubelet[2459]: E0712 00:17:42.015051 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.015124 kubelet[2459]: W0712 00:17:42.015068 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.015124 kubelet[2459]: E0712 00:17:42.015083 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.015595 kubelet[2459]: E0712 00:17:42.015302 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.015595 kubelet[2459]: W0712 00:17:42.015312 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.015595 kubelet[2459]: E0712 00:17:42.015325 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:42.015595 kubelet[2459]: E0712 00:17:42.015513 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.015595 kubelet[2459]: W0712 00:17:42.015522 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.015595 kubelet[2459]: E0712 00:17:42.015563 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.015795 kubelet[2459]: E0712 00:17:42.015747 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.015795 kubelet[2459]: W0712 00:17:42.015757 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.015895 kubelet[2459]: E0712 00:17:42.015795 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.016238 kubelet[2459]: E0712 00:17:42.016210 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.016238 kubelet[2459]: W0712 00:17:42.016225 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.016325 kubelet[2459]: E0712 00:17:42.016261 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.016660 kubelet[2459]: E0712 00:17:42.016624 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.016660 kubelet[2459]: W0712 00:17:42.016641 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.016751 kubelet[2459]: E0712 00:17:42.016677 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.016839 kubelet[2459]: E0712 00:17:42.016813 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.016839 kubelet[2459]: W0712 00:17:42.016828 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.016890 kubelet[2459]: E0712 00:17:42.016878 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:42.017582 kubelet[2459]: E0712 00:17:42.017545 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.017582 kubelet[2459]: W0712 00:17:42.017560 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.017692 kubelet[2459]: E0712 00:17:42.017611 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.019891 kubelet[2459]: E0712 00:17:42.019866 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.019891 kubelet[2459]: W0712 00:17:42.019884 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.020267 kubelet[2459]: E0712 00:17:42.019926 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.020267 kubelet[2459]: E0712 00:17:42.020109 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.020267 kubelet[2459]: W0712 00:17:42.020120 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.020267 kubelet[2459]: E0712 00:17:42.020159 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.020979 kubelet[2459]: E0712 00:17:42.020512 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.020979 kubelet[2459]: W0712 00:17:42.020525 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.020979 kubelet[2459]: E0712 00:17:42.020572 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.020979 kubelet[2459]: E0712 00:17:42.020724 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.020979 kubelet[2459]: W0712 00:17:42.020734 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.023215 kubelet[2459]: E0712 00:17:42.020766 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:42.023288 kubelet[2459]: E0712 00:17:42.020908 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.023288 kubelet[2459]: W0712 00:17:42.023247 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.023455 kubelet[2459]: E0712 00:17:42.023429 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.023817 kubelet[2459]: E0712 00:17:42.023783 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.023817 kubelet[2459]: W0712 00:17:42.023805 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.023958 kubelet[2459]: E0712 00:17:42.023935 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.024045 kubelet[2459]: E0712 00:17:42.024027 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.024045 kubelet[2459]: W0712 00:17:42.024043 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.024323 kubelet[2459]: E0712 00:17:42.024096 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.024385 kubelet[2459]: E0712 00:17:42.024357 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.024385 kubelet[2459]: W0712 00:17:42.024373 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.025473 kubelet[2459]: E0712 00:17:42.024436 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.025473 kubelet[2459]: E0712 00:17:42.024617 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.025473 kubelet[2459]: W0712 00:17:42.024627 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.025473 kubelet[2459]: E0712 00:17:42.024703 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:42.025473 kubelet[2459]: E0712 00:17:42.024802 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.025473 kubelet[2459]: W0712 00:17:42.024808 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.025473 kubelet[2459]: E0712 00:17:42.024846 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.025473 kubelet[2459]: E0712 00:17:42.025056 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.025473 kubelet[2459]: W0712 00:17:42.025065 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.025473 kubelet[2459]: E0712 00:17:42.025077 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.027643 kubelet[2459]: E0712 00:17:42.027606 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.027643 kubelet[2459]: W0712 00:17:42.027627 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.027744 kubelet[2459]: E0712 00:17:42.027651 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.027966 kubelet[2459]: E0712 00:17:42.027934 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.027966 kubelet[2459]: W0712 00:17:42.027948 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.027966 kubelet[2459]: E0712 00:17:42.027967 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.028425 kubelet[2459]: E0712 00:17:42.028149 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.028425 kubelet[2459]: W0712 00:17:42.028161 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.028425 kubelet[2459]: E0712 00:17:42.028179 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:42.028425 kubelet[2459]: E0712 00:17:42.028345 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.028425 kubelet[2459]: W0712 00:17:42.028353 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.028425 kubelet[2459]: E0712 00:17:42.028366 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.038718 kubelet[2459]: E0712 00:17:42.038676 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:42.038718 kubelet[2459]: W0712 00:17:42.038698 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:42.038718 kubelet[2459]: E0712 00:17:42.038717 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:42.745947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377233296.mount: Deactivated successfully. Jul 12 00:17:43.096079 containerd[1448]: time="2025-07-12T00:17:43.096032783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:43.096963 containerd[1448]: time="2025-07-12T00:17:43.096589488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 12 00:17:43.097450 containerd[1448]: time="2025-07-12T00:17:43.097424126Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:43.099249 containerd[1448]: time="2025-07-12T00:17:43.099216046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:43.100669 containerd[1448]: time="2025-07-12T00:17:43.100637910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.33012569s" Jul 12 00:17:43.100669 containerd[1448]: time="2025-07-12T00:17:43.100670032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:17:43.101741 containerd[1448]: time="2025-07-12T00:17:43.101696558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:17:43.122294 containerd[1448]: time="2025-07-12T00:17:43.122256641Z" level=info msg="CreateContainer within sandbox \"9a5113c0eed3e14f18c01ebf7f980939e7dc0d34b3bcb42a3386d63dbb714491\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:17:43.138349 containerd[1448]: time="2025-07-12T00:17:43.138306483Z" level=info msg="CreateContainer within sandbox \"9a5113c0eed3e14f18c01ebf7f980939e7dc0d34b3bcb42a3386d63dbb714491\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7c54059642417f95db893a06267b975f41e458a6f4084911642b1d464242610a\"" Jul 12 00:17:43.140188 containerd[1448]: time="2025-07-12T00:17:43.139054116Z" level=info msg="StartContainer for \"7c54059642417f95db893a06267b975f41e458a6f4084911642b1d464242610a\"" Jul 12 00:17:43.163555 systemd[1]: Started cri-containerd-7c54059642417f95db893a06267b975f41e458a6f4084911642b1d464242610a.scope - libcontainer container 7c54059642417f95db893a06267b975f41e458a6f4084911642b1d464242610a. Jul 12 00:17:43.188002 kubelet[2459]: E0712 00:17:43.187657 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wfgr2" podUID="015b368c-3c89-4707-af85-1b98a6fb48da" Jul 12 00:17:43.208326 containerd[1448]: time="2025-07-12T00:17:43.208269986Z" level=info msg="StartContainer for \"7c54059642417f95db893a06267b975f41e458a6f4084911642b1d464242610a\" returns successfully" Jul 12 00:17:43.284274 kubelet[2459]: E0712 00:17:43.284233 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:43.300732 kubelet[2459]: I0712 00:17:43.300605 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d5d56f5bd-phwl9" podStartSLOduration=0.968967933 podStartE2EDuration="2.300575493s" podCreationTimestamp="2025-07-12 00:17:41 +0000 UTC" firstStartedPulling="2025-07-12 00:17:41.769715181 +0000 UTC m=+18.659395957" lastFinishedPulling="2025-07-12 00:17:43.101322701 +0000 UTC m=+19.991003517" observedRunningTime="2025-07-12 00:17:43.298875137 +0000 UTC m=+20.188555953" watchObservedRunningTime="2025-07-12 00:17:43.300575493 +0000 UTC m=+20.190256269" Jul 12 00:17:43.316977 kubelet[2459]: E0712 00:17:43.316918 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.316977 kubelet[2459]: W0712 00:17:43.316948 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.316977 kubelet[2459]: E0712 00:17:43.316969 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:43.317689 kubelet[2459]: E0712 00:17:43.317536 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.317689 kubelet[2459]: W0712 00:17:43.317552 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.317689 kubelet[2459]: E0712 00:17:43.317565 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.317917 kubelet[2459]: E0712 00:17:43.317782 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.317917 kubelet[2459]: W0712 00:17:43.317794 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.317917 kubelet[2459]: E0712 00:17:43.317803 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.318086 kubelet[2459]: E0712 00:17:43.318052 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.318086 kubelet[2459]: W0712 00:17:43.318064 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.318086 kubelet[2459]: E0712 00:17:43.318076 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.318420 kubelet[2459]: E0712 00:17:43.318356 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.318420 kubelet[2459]: W0712 00:17:43.318370 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.318420 kubelet[2459]: E0712 00:17:43.318390 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.318786 kubelet[2459]: E0712 00:17:43.318725 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.318786 kubelet[2459]: W0712 00:17:43.318740 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.318786 kubelet[2459]: E0712 00:17:43.318750 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:43.319131 kubelet[2459]: E0712 00:17:43.318951 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.319131 kubelet[2459]: W0712 00:17:43.318964 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.319131 kubelet[2459]: E0712 00:17:43.318973 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.319256 kubelet[2459]: E0712 00:17:43.319237 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.319256 kubelet[2459]: W0712 00:17:43.319251 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.319321 kubelet[2459]: E0712 00:17:43.319260 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.319602 kubelet[2459]: E0712 00:17:43.319577 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.319602 kubelet[2459]: W0712 00:17:43.319599 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.319741 kubelet[2459]: E0712 00:17:43.319610 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.320606 kubelet[2459]: E0712 00:17:43.320534 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.320606 kubelet[2459]: W0712 00:17:43.320546 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.320606 kubelet[2459]: E0712 00:17:43.320555 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.320916 kubelet[2459]: E0712 00:17:43.320816 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.320916 kubelet[2459]: W0712 00:17:43.320831 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.320916 kubelet[2459]: E0712 00:17:43.320839 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:43.321125 kubelet[2459]: E0712 00:17:43.321084 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.321125 kubelet[2459]: W0712 00:17:43.321093 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.321125 kubelet[2459]: E0712 00:17:43.321101 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.321445 kubelet[2459]: E0712 00:17:43.321360 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.321445 kubelet[2459]: W0712 00:17:43.321373 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.321445 kubelet[2459]: E0712 00:17:43.321400 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.322701 kubelet[2459]: E0712 00:17:43.322455 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.322701 kubelet[2459]: W0712 00:17:43.322468 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.322701 kubelet[2459]: E0712 00:17:43.322478 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.322701 kubelet[2459]: E0712 00:17:43.322702 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.322823 kubelet[2459]: W0712 00:17:43.322711 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.322823 kubelet[2459]: E0712 00:17:43.322720 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.326071 kubelet[2459]: E0712 00:17:43.326044 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.326538 kubelet[2459]: W0712 00:17:43.326429 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.326538 kubelet[2459]: E0712 00:17:43.326451 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:43.327311 kubelet[2459]: E0712 00:17:43.327179 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.327311 kubelet[2459]: W0712 00:17:43.327195 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.327311 kubelet[2459]: E0712 00:17:43.327209 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.327628 kubelet[2459]: E0712 00:17:43.327580 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.327628 kubelet[2459]: W0712 00:17:43.327606 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.327825 kubelet[2459]: E0712 00:17:43.327811 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.328092 kubelet[2459]: E0712 00:17:43.327979 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.328092 kubelet[2459]: W0712 00:17:43.327993 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.328092 kubelet[2459]: E0712 00:17:43.328004 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.328249 kubelet[2459]: E0712 00:17:43.328236 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.328409 kubelet[2459]: W0712 00:17:43.328295 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.328409 kubelet[2459]: E0712 00:17:43.328310 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:17:43.328547 kubelet[2459]: E0712 00:17:43.328535 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.328616 kubelet[2459]: W0712 00:17:43.328599 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.328764 kubelet[2459]: E0712 00:17:43.328728 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:17:43.328883 kubelet[2459]: E0712 00:17:43.328870 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.328973 kubelet[2459]: W0712 00:17:43.328934 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.328973 kubelet[2459]: E0712 00:17:43.328962 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:17:43.333779 kubelet[2459]: E0712 00:17:43.333764 2459 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:17:43.333855 kubelet[2459]: W0712 00:17:43.333843 2459 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:17:43.333935 kubelet[2459]: E0712 00:17:43.333905 2459 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:17:44.116989 containerd[1448]: time="2025-07-12T00:17:44.116902405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:44.118107 containerd[1448]: time="2025-07-12T00:17:44.117397067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 12 00:17:44.118459 containerd[1448]: time="2025-07-12T00:17:44.118437751Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:44.120531 containerd[1448]: time="2025-07-12T00:17:44.120468518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:44.121266 containerd[1448]: time="2025-07-12T00:17:44.121230991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.019496031s" Jul 12 00:17:44.121266 containerd[1448]: time="2025-07-12T00:17:44.121262112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:17:44.124238 containerd[1448]: time="2025-07-12T00:17:44.124206078Z" level=info msg="CreateContainer within sandbox \"d0a942ca14fffda77d781d0d084ecea70490c08f94ee03a1eda8ec822ba6f7dd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:17:44.136742 containerd[1448]: time="2025-07-12T00:17:44.136662091Z" level=info msg="CreateContainer within sandbox
\"d0a942ca14fffda77d781d0d084ecea70490c08f94ee03a1eda8ec822ba6f7dd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b39223e895628faf7c4bacfe229f77b68a34f5aa5514303b6602770dd0bf74e2\"" Jul 12 00:17:44.137547 containerd[1448]: time="2025-07-12T00:17:44.137498887Z" level=info msg="StartContainer for \"b39223e895628faf7c4bacfe229f77b68a34f5aa5514303b6602770dd0bf74e2\"" Jul 12 00:17:44.175575 systemd[1]: Started cri-containerd-b39223e895628faf7c4bacfe229f77b68a34f5aa5514303b6602770dd0bf74e2.scope - libcontainer container b39223e895628faf7c4bacfe229f77b68a34f5aa5514303b6602770dd0bf74e2. Jul 12 00:17:44.201882 containerd[1448]: time="2025-07-12T00:17:44.201839482Z" level=info msg="StartContainer for \"b39223e895628faf7c4bacfe229f77b68a34f5aa5514303b6602770dd0bf74e2\" returns successfully" Jul 12 00:17:44.241608 systemd[1]: cri-containerd-b39223e895628faf7c4bacfe229f77b68a34f5aa5514303b6602770dd0bf74e2.scope: Deactivated successfully. Jul 12 00:17:44.290421 kubelet[2459]: I0712 00:17:44.290204 2459 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:17:44.290937 kubelet[2459]: E0712 00:17:44.290777 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:44.363123 containerd[1448]: time="2025-07-12T00:17:44.344604995Z" level=info msg="shim disconnected" id=b39223e895628faf7c4bacfe229f77b68a34f5aa5514303b6602770dd0bf74e2 namespace=k8s.io Jul 12 00:17:44.363123 containerd[1448]: time="2025-07-12T00:17:44.362962421Z" level=warning msg="cleaning up after shim disconnected" id=b39223e895628faf7c4bacfe229f77b68a34f5aa5514303b6602770dd0bf74e2 namespace=k8s.io Jul 12 00:17:44.363123 containerd[1448]: time="2025-07-12T00:17:44.362978701Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:17:44.609546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b39223e895628faf7c4bacfe229f77b68a34f5aa5514303b6602770dd0bf74e2-rootfs.mount: Deactivated successfully. 
Jul 12 00:17:45.186942 kubelet[2459]: E0712 00:17:45.186646 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wfgr2" podUID="015b368c-3c89-4707-af85-1b98a6fb48da" Jul 12 00:17:45.305209 containerd[1448]: time="2025-07-12T00:17:45.305109078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:17:47.150728 containerd[1448]: time="2025-07-12T00:17:47.150667248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:47.151861 containerd[1448]: time="2025-07-12T00:17:47.151648965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 12 00:17:47.152800 containerd[1448]: time="2025-07-12T00:17:47.152724125Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:47.154770 containerd[1448]: time="2025-07-12T00:17:47.154717479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:47.156116 containerd[1448]: time="2025-07-12T00:17:47.155712596Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 1.850561637s" Jul 12 00:17:47.156116 containerd[1448]: time="2025-07-12T00:17:47.155753878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:17:47.158524 containerd[1448]: time="2025-07-12T00:17:47.158489340Z" level=info msg="CreateContainer within sandbox \"d0a942ca14fffda77d781d0d084ecea70490c08f94ee03a1eda8ec822ba6f7dd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:17:47.181609 containerd[1448]: time="2025-07-12T00:17:47.181514997Z" level=info msg="CreateContainer within sandbox \"d0a942ca14fffda77d781d0d084ecea70490c08f94ee03a1eda8ec822ba6f7dd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c428a1c67bd68c6591b19e91e9cf7fa45d36c8b5f5b4114b9e4fe08ab8bd8609\"" Jul 12 00:17:47.182177 containerd[1448]: time="2025-07-12T00:17:47.182059777Z" level=info msg="StartContainer for \"c428a1c67bd68c6591b19e91e9cf7fa45d36c8b5f5b4114b9e4fe08ab8bd8609\"" Jul 12 00:17:47.187667 kubelet[2459]: E0712 00:17:47.187613 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wfgr2" podUID="015b368c-3c89-4707-af85-1b98a6fb48da" Jul 12 00:17:47.218779 systemd[1]: Started cri-containerd-c428a1c67bd68c6591b19e91e9cf7fa45d36c8b5f5b4114b9e4fe08ab8bd8609.scope - libcontainer container c428a1c67bd68c6591b19e91e9cf7fa45d36c8b5f5b4114b9e4fe08ab8bd8609. 
Jul 12 00:17:47.542244 containerd[1448]: time="2025-07-12T00:17:47.542178787Z" level=info msg="StartContainer for \"c428a1c67bd68c6591b19e91e9cf7fa45d36c8b5f5b4114b9e4fe08ab8bd8609\" returns successfully" Jul 12 00:17:47.959395 systemd[1]: cri-containerd-c428a1c67bd68c6591b19e91e9cf7fa45d36c8b5f5b4114b9e4fe08ab8bd8609.scope: Deactivated successfully. Jul 12 00:17:47.983400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c428a1c67bd68c6591b19e91e9cf7fa45d36c8b5f5b4114b9e4fe08ab8bd8609-rootfs.mount: Deactivated successfully. Jul 12 00:17:47.991258 containerd[1448]: time="2025-07-12T00:17:47.991196908Z" level=info msg="shim disconnected" id=c428a1c67bd68c6591b19e91e9cf7fa45d36c8b5f5b4114b9e4fe08ab8bd8609 namespace=k8s.io Jul 12 00:17:47.991258 containerd[1448]: time="2025-07-12T00:17:47.991253190Z" level=warning msg="cleaning up after shim disconnected" id=c428a1c67bd68c6591b19e91e9cf7fa45d36c8b5f5b4114b9e4fe08ab8bd8609 namespace=k8s.io Jul 12 00:17:47.991258 containerd[1448]: time="2025-07-12T00:17:47.991262430Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:17:48.056085 kubelet[2459]: I0712 00:17:48.056057 2459 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:17:48.128287 systemd[1]: Created slice kubepods-besteffort-pod7dcf09fb_a512_439e_938a_bfe4c44b49b4.slice - libcontainer container kubepods-besteffort-pod7dcf09fb_a512_439e_938a_bfe4c44b49b4.slice. Jul 12 00:17:48.141297 systemd[1]: Created slice kubepods-burstable-podc50cf70e_483e_49bb_a8b2_b017faf73702.slice - libcontainer container kubepods-burstable-podc50cf70e_483e_49bb_a8b2_b017faf73702.slice. Jul 12 00:17:48.153816 systemd[1]: Created slice kubepods-burstable-pod74f3a898_fc16_41f9_a59d_febcf1761d1e.slice - libcontainer container kubepods-burstable-pod74f3a898_fc16_41f9_a59d_febcf1761d1e.slice. Jul 12 00:17:48.171624 systemd[1]: Created slice kubepods-besteffort-pod26782363_597c_473e_9a7b_6c89373057d1.slice - libcontainer container kubepods-besteffort-pod26782363_597c_473e_9a7b_6c89373057d1.slice. Jul 12 00:17:48.179481 systemd[1]: Created slice kubepods-besteffort-pod1ca0963e_ae80_4ed4_8ecd_1417da594c22.slice - libcontainer container kubepods-besteffort-pod1ca0963e_ae80_4ed4_8ecd_1417da594c22.slice. Jul 12 00:17:48.185394 systemd[1]: Created slice kubepods-besteffort-pod7c756f58_3e4d_4cb6_8d6f_41a98b929020.slice - libcontainer container kubepods-besteffort-pod7c756f58_3e4d_4cb6_8d6f_41a98b929020.slice. Jul 12 00:17:48.191219 systemd[1]: Created slice kubepods-besteffort-pod81143345_0e90_4512_9092_036342474e19.slice - libcontainer container kubepods-besteffort-pod81143345_0e90_4512_9092_036342474e19.slice. 
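The kubepods-*.slice names in the entries above follow the kubelet's systemd cgroup driver convention: QoS class plus the pod UID, with the dashes in the UID rewritten to underscores because systemd treats - in a slice name as hierarchy separators. A small reconstruction of the mapping (sliceName is an illustrative helper, not a kubelet function):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // sliceName reconstructs the systemd slice names seen in the log:
    // "kubepods" + QoS class + "pod" + UID with '-' mapped to '_'.
    // Illustrative only; the kubelet builds these via its cgroup manager.
    func sliceName(qosClass, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice",
    		qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	// Reproduces the first slice created above.
    	fmt.Println(sliceName("besteffort", "7dcf09fb-a512-439e-938a-bfe4c44b49b4"))
    	// -> kubepods-besteffort-pod7dcf09fb_a512_439e_938a_bfe4c44b49b4.slice
    }

The pod UIDs embedded in the slice names match the RunPodSandbox requests that follow, e.g. calico-kube-controllers-7b5f84d77b-mwdk4 with UID 7dcf09fb-a512-439e-938a-bfe4c44b49b4.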
Jul 12 00:17:48.279298 kubelet[2459]: I0712 00:17:48.279252 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c50cf70e-483e-49bb-a8b2-b017faf73702-config-volume\") pod \"coredns-7c65d6cfc9-knnq4\" (UID: \"c50cf70e-483e-49bb-a8b2-b017faf73702\") " pod="kube-system/coredns-7c65d6cfc9-knnq4" Jul 12 00:17:48.279894 kubelet[2459]: I0712 00:17:48.279308 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7cq8\" (UniqueName: \"kubernetes.io/projected/81143345-0e90-4512-9092-036342474e19-kube-api-access-c7cq8\") pod \"calico-apiserver-6bf689cfcd-j9lf2\" (UID: \"81143345-0e90-4512-9092-036342474e19\") " pod="calico-apiserver/calico-apiserver-6bf689cfcd-j9lf2" Jul 12 00:17:48.279894 kubelet[2459]: I0712 00:17:48.279333 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbcmc\" (UniqueName: \"kubernetes.io/projected/26782363-597c-473e-9a7b-6c89373057d1-kube-api-access-fbcmc\") pod \"goldmane-58fd7646b9-jmdtm\" (UID: \"26782363-597c-473e-9a7b-6c89373057d1\") " pod="calico-system/goldmane-58fd7646b9-jmdtm" Jul 12 00:17:48.279894 kubelet[2459]: I0712 00:17:48.279350 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c756f58-3e4d-4cb6-8d6f-41a98b929020-whisker-backend-key-pair\") pod \"whisker-69966d7884-bjkn9\" (UID: \"7c756f58-3e4d-4cb6-8d6f-41a98b929020\") " pod="calico-system/whisker-69966d7884-bjkn9" Jul 12 00:17:48.279894 kubelet[2459]: I0712 00:17:48.279368 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26782363-597c-473e-9a7b-6c89373057d1-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-jmdtm\" (UID: \"26782363-597c-473e-9a7b-6c89373057d1\") " pod="calico-system/goldmane-58fd7646b9-jmdtm" Jul 12 00:17:48.279894 kubelet[2459]: I0712 00:17:48.279407 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8bt\" (UniqueName: \"kubernetes.io/projected/7c756f58-3e4d-4cb6-8d6f-41a98b929020-kube-api-access-qf8bt\") pod \"whisker-69966d7884-bjkn9\" (UID: \"7c756f58-3e4d-4cb6-8d6f-41a98b929020\") " pod="calico-system/whisker-69966d7884-bjkn9" Jul 12 00:17:48.280033 kubelet[2459]: I0712 00:17:48.279428 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dcf09fb-a512-439e-938a-bfe4c44b49b4-tigera-ca-bundle\") pod \"calico-kube-controllers-7b5f84d77b-mwdk4\" (UID: \"7dcf09fb-a512-439e-938a-bfe4c44b49b4\") " pod="calico-system/calico-kube-controllers-7b5f84d77b-mwdk4" Jul 12 00:17:48.280033 kubelet[2459]: I0712 00:17:48.279444 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6tj7\" (UniqueName: \"kubernetes.io/projected/7dcf09fb-a512-439e-938a-bfe4c44b49b4-kube-api-access-w6tj7\") pod \"calico-kube-controllers-7b5f84d77b-mwdk4\" (UID: \"7dcf09fb-a512-439e-938a-bfe4c44b49b4\") " pod="calico-system/calico-kube-controllers-7b5f84d77b-mwdk4" Jul 12 00:17:48.280033 kubelet[2459]: I0712 00:17:48.279462 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1ca0963e-ae80-4ed4-8ecd-1417da594c22-calico-apiserver-certs\") pod \"calico-apiserver-6bf689cfcd-9drtq\" (UID: \"1ca0963e-ae80-4ed4-8ecd-1417da594c22\") " pod="calico-apiserver/calico-apiserver-6bf689cfcd-9drtq" Jul 12 00:17:48.280033 kubelet[2459]: I0712 00:17:48.279480 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9czt\" (UniqueName: \"kubernetes.io/projected/1ca0963e-ae80-4ed4-8ecd-1417da594c22-kube-api-access-l9czt\") pod \"calico-apiserver-6bf689cfcd-9drtq\" (UID: \"1ca0963e-ae80-4ed4-8ecd-1417da594c22\") " pod="calico-apiserver/calico-apiserver-6bf689cfcd-9drtq" Jul 12 00:17:48.280033 kubelet[2459]: I0712 00:17:48.279499 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26782363-597c-473e-9a7b-6c89373057d1-config\") pod \"goldmane-58fd7646b9-jmdtm\" (UID: \"26782363-597c-473e-9a7b-6c89373057d1\") " pod="calico-system/goldmane-58fd7646b9-jmdtm" Jul 12 00:17:48.280158 kubelet[2459]: I0712 00:17:48.279516 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74f3a898-fc16-41f9-a59d-febcf1761d1e-config-volume\") pod \"coredns-7c65d6cfc9-k7xgr\" (UID: \"74f3a898-fc16-41f9-a59d-febcf1761d1e\") " pod="kube-system/coredns-7c65d6cfc9-k7xgr" Jul 12 00:17:48.280158 kubelet[2459]: I0712 00:17:48.279560 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9jb6\" (UniqueName: \"kubernetes.io/projected/c50cf70e-483e-49bb-a8b2-b017faf73702-kube-api-access-p9jb6\") pod \"coredns-7c65d6cfc9-knnq4\" (UID: \"c50cf70e-483e-49bb-a8b2-b017faf73702\") " pod="kube-system/coredns-7c65d6cfc9-knnq4" Jul 12 00:17:48.280158 kubelet[2459]: I0712 00:17:48.279638 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/26782363-597c-473e-9a7b-6c89373057d1-goldmane-key-pair\") pod \"goldmane-58fd7646b9-jmdtm\" (UID: \"26782363-597c-473e-9a7b-6c89373057d1\") " pod="calico-system/goldmane-58fd7646b9-jmdtm" Jul 12 00:17:48.280158 kubelet[2459]: I0712 00:17:48.279663 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c756f58-3e4d-4cb6-8d6f-41a98b929020-whisker-ca-bundle\") pod \"whisker-69966d7884-bjkn9\" (UID: \"7c756f58-3e4d-4cb6-8d6f-41a98b929020\") " pod="calico-system/whisker-69966d7884-bjkn9" Jul 12 00:17:48.280158 kubelet[2459]: I0712 00:17:48.279680 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/81143345-0e90-4512-9092-036342474e19-calico-apiserver-certs\") pod \"calico-apiserver-6bf689cfcd-j9lf2\" (UID: \"81143345-0e90-4512-9092-036342474e19\") " pod="calico-apiserver/calico-apiserver-6bf689cfcd-j9lf2" Jul 12 00:17:48.280283 kubelet[2459]: I0712 00:17:48.279718 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65gmb\" (UniqueName: \"kubernetes.io/projected/74f3a898-fc16-41f9-a59d-febcf1761d1e-kube-api-access-65gmb\") pod \"coredns-7c65d6cfc9-k7xgr\" (UID: \"74f3a898-fc16-41f9-a59d-febcf1761d1e\") " 
pod="kube-system/coredns-7c65d6cfc9-k7xgr" Jul 12 00:17:48.433595 containerd[1448]: time="2025-07-12T00:17:48.433552875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5f84d77b-mwdk4,Uid:7dcf09fb-a512-439e-938a-bfe4c44b49b4,Namespace:calico-system,Attempt:0,}" Jul 12 00:17:48.448390 kubelet[2459]: E0712 00:17:48.448346 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:48.449193 containerd[1448]: time="2025-07-12T00:17:48.449141710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-knnq4,Uid:c50cf70e-483e-49bb-a8b2-b017faf73702,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:48.472104 kubelet[2459]: E0712 00:17:48.472059 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:48.472661 containerd[1448]: time="2025-07-12T00:17:48.472611666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k7xgr,Uid:74f3a898-fc16-41f9-a59d-febcf1761d1e,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:48.477301 containerd[1448]: time="2025-07-12T00:17:48.477009102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jmdtm,Uid:26782363-597c-473e-9a7b-6c89373057d1,Namespace:calico-system,Attempt:0,}" Jul 12 00:17:48.508451 containerd[1448]: time="2025-07-12T00:17:48.496644281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69966d7884-bjkn9,Uid:7c756f58-3e4d-4cb6-8d6f-41a98b929020,Namespace:calico-system,Attempt:0,}" Jul 12 00:17:48.508451 containerd[1448]: time="2025-07-12T00:17:48.496893530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf689cfcd-9drtq,Uid:1ca0963e-ae80-4ed4-8ecd-1417da594c22,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:17:48.508451 containerd[1448]: time="2025-07-12T00:17:48.497042215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf689cfcd-j9lf2,Uid:81143345-0e90-4512-9092-036342474e19,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:17:48.577165 containerd[1448]: time="2025-07-12T00:17:48.576474244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:17:49.019042 containerd[1448]: time="2025-07-12T00:17:49.018995293Z" level=error msg="Failed to destroy network for sandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.020858 containerd[1448]: time="2025-07-12T00:17:49.020817475Z" level=error msg="encountered an error cleaning up failed sandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.021017 containerd[1448]: time="2025-07-12T00:17:49.020994121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-knnq4,Uid:c50cf70e-483e-49bb-a8b2-b017faf73702,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.021788 kubelet[2459]: E0712 00:17:49.021743 2459 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.022240 containerd[1448]: time="2025-07-12T00:17:49.022189922Z" level=error msg="Failed to destroy network for sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.022577 containerd[1448]: time="2025-07-12T00:17:49.022548654Z" level=error msg="encountered an error cleaning up failed sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.022652 containerd[1448]: time="2025-07-12T00:17:49.022617656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jmdtm,Uid:26782363-597c-473e-9a7b-6c89373057d1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.023222 kubelet[2459]: E0712 00:17:49.022835 2459 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.023222 kubelet[2459]: E0712 00:17:49.022901 2459 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-jmdtm" Jul 12 00:17:49.023222 kubelet[2459]: E0712 00:17:49.022921 2459 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-jmdtm" Jul 12 00:17:49.023425 kubelet[2459]: E0712 
00:17:49.022962 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-jmdtm_calico-system(26782363-597c-473e-9a7b-6c89373057d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-jmdtm_calico-system(26782363-597c-473e-9a7b-6c89373057d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-jmdtm" podUID="26782363-597c-473e-9a7b-6c89373057d1" Jul 12 00:17:49.024953 kubelet[2459]: E0712 00:17:49.024553 2459 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-knnq4" Jul 12 00:17:49.024953 kubelet[2459]: E0712 00:17:49.024619 2459 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-knnq4" Jul 12 00:17:49.024953 kubelet[2459]: E0712 00:17:49.024668 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-knnq4_kube-system(c50cf70e-483e-49bb-a8b2-b017faf73702)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-knnq4_kube-system(c50cf70e-483e-49bb-a8b2-b017faf73702)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-knnq4" podUID="c50cf70e-483e-49bb-a8b2-b017faf73702" Jul 12 00:17:49.032595 containerd[1448]: time="2025-07-12T00:17:49.032543475Z" level=error msg="Failed to destroy network for sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.032760 containerd[1448]: time="2025-07-12T00:17:49.032543555Z" level=error msg="Failed to destroy network for sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.032935 containerd[1448]: time="2025-07-12T00:17:49.032894127Z" level=error msg="encountered an error cleaning up failed sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.032984 containerd[1448]: time="2025-07-12T00:17:49.032956569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf689cfcd-j9lf2,Uid:81143345-0e90-4512-9092-036342474e19,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.034191 kubelet[2459]: E0712 00:17:49.033204 2459 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.034191 kubelet[2459]: E0712 00:17:49.033264 2459 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf689cfcd-j9lf2" Jul 12 00:17:49.034191 kubelet[2459]: E0712 00:17:49.033283 2459 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf689cfcd-j9lf2" Jul 12 00:17:49.034367 kubelet[2459]: E0712 00:17:49.033320 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf689cfcd-j9lf2_calico-apiserver(81143345-0e90-4512-9092-036342474e19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf689cfcd-j9lf2_calico-apiserver(81143345-0e90-4512-9092-036342474e19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf689cfcd-j9lf2" podUID="81143345-0e90-4512-9092-036342474e19" Jul 12 00:17:49.034622 containerd[1448]: time="2025-07-12T00:17:49.034587864Z" level=error msg="encountered an error cleaning up failed sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.034768 containerd[1448]: 
time="2025-07-12T00:17:49.034744350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69966d7884-bjkn9,Uid:7c756f58-3e4d-4cb6-8d6f-41a98b929020,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.035035 kubelet[2459]: E0712 00:17:49.035005 2459 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.035136 kubelet[2459]: E0712 00:17:49.035119 2459 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69966d7884-bjkn9" Jul 12 00:17:49.035207 kubelet[2459]: E0712 00:17:49.035192 2459 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69966d7884-bjkn9" Jul 12 00:17:49.035307 kubelet[2459]: E0712 00:17:49.035284 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69966d7884-bjkn9_calico-system(7c756f58-3e4d-4cb6-8d6f-41a98b929020)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69966d7884-bjkn9_calico-system(7c756f58-3e4d-4cb6-8d6f-41a98b929020)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69966d7884-bjkn9" podUID="7c756f58-3e4d-4cb6-8d6f-41a98b929020" Jul 12 00:17:49.037497 containerd[1448]: time="2025-07-12T00:17:49.037451922Z" level=error msg="Failed to destroy network for sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.039539 containerd[1448]: time="2025-07-12T00:17:49.038905732Z" level=error msg="Failed to destroy network for sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.039807 
containerd[1448]: time="2025-07-12T00:17:49.039711599Z" level=error msg="encountered an error cleaning up failed sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.039862 containerd[1448]: time="2025-07-12T00:17:49.039823003Z" level=error msg="encountered an error cleaning up failed sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.039911 containerd[1448]: time="2025-07-12T00:17:49.039884765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf689cfcd-9drtq,Uid:1ca0963e-ae80-4ed4-8ecd-1417da594c22,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.039975 containerd[1448]: time="2025-07-12T00:17:49.039824683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k7xgr,Uid:74f3a898-fc16-41f9-a59d-febcf1761d1e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.040142 kubelet[2459]: E0712 00:17:49.040103 2459 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.040196 kubelet[2459]: E0712 00:17:49.040155 2459 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf689cfcd-9drtq" Jul 12 00:17:49.040196 kubelet[2459]: E0712 00:17:49.040174 2459 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf689cfcd-9drtq" Jul 12 00:17:49.040240 kubelet[2459]: E0712 00:17:49.040212 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-6bf689cfcd-9drtq_calico-apiserver(1ca0963e-ae80-4ed4-8ecd-1417da594c22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf689cfcd-9drtq_calico-apiserver(1ca0963e-ae80-4ed4-8ecd-1417da594c22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf689cfcd-9drtq" podUID="1ca0963e-ae80-4ed4-8ecd-1417da594c22" Jul 12 00:17:49.040310 kubelet[2459]: E0712 00:17:49.040114 2459 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.040662 kubelet[2459]: E0712 00:17:49.040535 2459 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-k7xgr" Jul 12 00:17:49.040662 kubelet[2459]: E0712 00:17:49.040574 2459 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-k7xgr" Jul 12 00:17:49.040662 kubelet[2459]: E0712 00:17:49.040618 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-k7xgr_kube-system(74f3a898-fc16-41f9-a59d-febcf1761d1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-k7xgr_kube-system(74f3a898-fc16-41f9-a59d-febcf1761d1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-k7xgr" podUID="74f3a898-fc16-41f9-a59d-febcf1761d1e" Jul 12 00:17:49.041337 containerd[1448]: time="2025-07-12T00:17:49.041306493Z" level=error msg="Failed to destroy network for sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.041735 containerd[1448]: time="2025-07-12T00:17:49.041705187Z" level=error msg="encountered an error cleaning up failed sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.041947 containerd[1448]: time="2025-07-12T00:17:49.041917954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5f84d77b-mwdk4,Uid:7dcf09fb-a512-439e-938a-bfe4c44b49b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.042441 kubelet[2459]: E0712 00:17:49.042178 2459 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.042441 kubelet[2459]: E0712 00:17:49.042216 2459 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b5f84d77b-mwdk4" Jul 12 00:17:49.042441 kubelet[2459]: E0712 00:17:49.042236 2459 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b5f84d77b-mwdk4" Jul 12 00:17:49.042634 kubelet[2459]: E0712 00:17:49.042410 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b5f84d77b-mwdk4_calico-system(7dcf09fb-a512-439e-938a-bfe4c44b49b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b5f84d77b-mwdk4_calico-system(7dcf09fb-a512-439e-938a-bfe4c44b49b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b5f84d77b-mwdk4" podUID="7dcf09fb-a512-439e-938a-bfe4c44b49b4" Jul 12 00:17:49.194172 systemd[1]: Created slice kubepods-besteffort-pod015b368c_3c89_4707_af85_1b98a6fb48da.slice - libcontainer container kubepods-besteffort-pod015b368c_3c89_4707_af85_1b98a6fb48da.slice. 
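All of the RunPodSandbox failures above share a single root cause, spelled out in the error text itself: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and calico/node's image is still being pulled at 00:17:48.576. Until it is up, every CNI add or delete fails and each pod is parked in CreatePodSandboxError. A minimal sketch of the failing check, assuming nothing beyond the path named in the log:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// The Calico CNI plugin learns its node name from this file; it is
    	// created by the calico/node container after it starts and mounts
    	// /var/lib/calico/. Until then the stat fails exactly as logged.
    	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
    		fmt.Println(err) // stat /var/lib/calico/nodename: no such file or directory
    	}
    }

The StopPodSandbox attempts that follow fail with the same message because the CNI delete path performs the same lookup; once calico/node is running, the kubelet's retries succeed.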
Jul 12 00:17:49.197108 containerd[1448]: time="2025-07-12T00:17:49.197060641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wfgr2,Uid:015b368c-3c89-4707-af85-1b98a6fb48da,Namespace:calico-system,Attempt:0,}" Jul 12 00:17:49.262231 containerd[1448]: time="2025-07-12T00:17:49.262091817Z" level=error msg="Failed to destroy network for sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.262706 containerd[1448]: time="2025-07-12T00:17:49.262562193Z" level=error msg="encountered an error cleaning up failed sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.262706 containerd[1448]: time="2025-07-12T00:17:49.262613075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wfgr2,Uid:015b368c-3c89-4707-af85-1b98a6fb48da,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.262893 kubelet[2459]: E0712 00:17:49.262844 2459 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.262942 kubelet[2459]: E0712 00:17:49.262904 2459 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wfgr2" Jul 12 00:17:49.262942 kubelet[2459]: E0712 00:17:49.262927 2459 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wfgr2" Jul 12 00:17:49.262997 kubelet[2459]: E0712 00:17:49.262963 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wfgr2_calico-system(015b368c-3c89-4707-af85-1b98a6fb48da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wfgr2_calico-system(015b368c-3c89-4707-af85-1b98a6fb48da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wfgr2" podUID="015b368c-3c89-4707-af85-1b98a6fb48da" Jul 12 00:17:49.392954 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb-shm.mount: Deactivated successfully. Jul 12 00:17:49.571779 kubelet[2459]: I0712 00:17:49.571458 2459 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:17:49.572976 containerd[1448]: time="2025-07-12T00:17:49.572546636Z" level=info msg="StopPodSandbox for \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\"" Jul 12 00:17:49.572976 containerd[1448]: time="2025-07-12T00:17:49.572744203Z" level=info msg="Ensure that sandbox a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb in task-service has been cleanup successfully" Jul 12 00:17:49.573295 kubelet[2459]: I0712 00:17:49.573169 2459 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Jul 12 00:17:49.573755 containerd[1448]: time="2025-07-12T00:17:49.573717476Z" level=info msg="StopPodSandbox for \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\"" Jul 12 00:17:49.574400 containerd[1448]: time="2025-07-12T00:17:49.573880001Z" level=info msg="Ensure that sandbox 5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515 in task-service has been cleanup successfully" Jul 12 00:17:49.578804 kubelet[2459]: I0712 00:17:49.578771 2459 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:17:49.580561 containerd[1448]: time="2025-07-12T00:17:49.580522388Z" level=info msg="StopPodSandbox for \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\"" Jul 12 00:17:49.580727 containerd[1448]: time="2025-07-12T00:17:49.580694433Z" level=info msg="Ensure that sandbox 6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882 in task-service has been cleanup successfully" Jul 12 00:17:49.582318 kubelet[2459]: I0712 00:17:49.582224 2459 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:17:49.585773 kubelet[2459]: I0712 00:17:49.585717 2459 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Jul 12 00:17:49.587464 kubelet[2459]: I0712 00:17:49.587416 2459 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" Jul 12 00:17:49.596282 containerd[1448]: time="2025-07-12T00:17:49.596135640Z" level=info msg="StopPodSandbox for \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\"" Jul 12 00:17:49.601541 containerd[1448]: time="2025-07-12T00:17:49.601502102Z" level=info msg="Ensure that sandbox 4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78 in task-service has been cleanup successfully" Jul 12 00:17:49.605854 containerd[1448]: time="2025-07-12T00:17:49.605550040Z" 
level=info msg="StopPodSandbox for \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\"" Jul 12 00:17:49.605854 containerd[1448]: time="2025-07-12T00:17:49.605717806Z" level=info msg="Ensure that sandbox 690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518 in task-service has been cleanup successfully" Jul 12 00:17:49.607116 containerd[1448]: time="2025-07-12T00:17:49.607081933Z" level=info msg="StopPodSandbox for \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\"" Jul 12 00:17:49.607258 containerd[1448]: time="2025-07-12T00:17:49.607235498Z" level=info msg="Ensure that sandbox 37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2 in task-service has been cleanup successfully" Jul 12 00:17:49.607986 kubelet[2459]: I0712 00:17:49.607956 2459 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:17:49.609742 containerd[1448]: time="2025-07-12T00:17:49.609623699Z" level=info msg="StopPodSandbox for \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\"" Jul 12 00:17:49.609874 containerd[1448]: time="2025-07-12T00:17:49.609851987Z" level=info msg="Ensure that sandbox 72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0 in task-service has been cleanup successfully" Jul 12 00:17:49.613000 kubelet[2459]: I0712 00:17:49.612362 2459 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Jul 12 00:17:49.613161 containerd[1448]: time="2025-07-12T00:17:49.613121138Z" level=info msg="StopPodSandbox for \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\"" Jul 12 00:17:49.613331 containerd[1448]: time="2025-07-12T00:17:49.613311385Z" level=info msg="Ensure that sandbox dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770 in task-service has been cleanup successfully" Jul 12 00:17:49.645338 containerd[1448]: time="2025-07-12T00:17:49.644638412Z" level=error msg="StopPodSandbox for \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\" failed" error="failed to destroy network for sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.645503 kubelet[2459]: E0712 00:17:49.644886 2459 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Jul 12 00:17:49.645503 kubelet[2459]: E0712 00:17:49.644950 2459 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515"} Jul 12 00:17:49.645503 kubelet[2459]: E0712 00:17:49.645184 2459 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"81143345-0e90-4512-9092-036342474e19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:17:49.645503 kubelet[2459]: E0712 00:17:49.645208 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"81143345-0e90-4512-9092-036342474e19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf689cfcd-j9lf2" podUID="81143345-0e90-4512-9092-036342474e19" Jul 12 00:17:49.645935 containerd[1448]: time="2025-07-12T00:17:49.645419119Z" level=error msg="StopPodSandbox for \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\" failed" error="failed to destroy network for sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.645968 kubelet[2459]: E0712 00:17:49.645600 2459 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:17:49.645968 kubelet[2459]: E0712 00:17:49.645637 2459 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb"} Jul 12 00:17:49.645968 kubelet[2459]: E0712 00:17:49.645750 2459 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7dcf09fb-a512-439e-938a-bfe4c44b49b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:17:49.645968 kubelet[2459]: E0712 00:17:49.645778 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7dcf09fb-a512-439e-938a-bfe4c44b49b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b5f84d77b-mwdk4" podUID="7dcf09fb-a512-439e-938a-bfe4c44b49b4" Jul 12 00:17:49.665956 containerd[1448]: time="2025-07-12T00:17:49.665901497Z" level=error msg="StopPodSandbox for 
\"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\" failed" error="failed to destroy network for sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.673485 kubelet[2459]: E0712 00:17:49.673175 2459 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" Jul 12 00:17:49.673485 kubelet[2459]: E0712 00:17:49.673276 2459 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"} Jul 12 00:17:49.673485 kubelet[2459]: E0712 00:17:49.673312 2459 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ca0963e-ae80-4ed4-8ecd-1417da594c22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:17:49.673485 kubelet[2459]: E0712 00:17:49.673335 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ca0963e-ae80-4ed4-8ecd-1417da594c22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf689cfcd-9drtq" podUID="1ca0963e-ae80-4ed4-8ecd-1417da594c22" Jul 12 00:17:49.680716 containerd[1448]: time="2025-07-12T00:17:49.680579197Z" level=error msg="StopPodSandbox for \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\" failed" error="failed to destroy network for sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.681539 kubelet[2459]: E0712 00:17:49.680835 2459 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:17:49.681539 kubelet[2459]: E0712 00:17:49.680892 2459 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78"} Jul 12 00:17:49.681539 kubelet[2459]: E0712 00:17:49.680937 2459 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26782363-597c-473e-9a7b-6c89373057d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:17:49.681539 kubelet[2459]: E0712 00:17:49.680963 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26782363-597c-473e-9a7b-6c89373057d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-jmdtm" podUID="26782363-597c-473e-9a7b-6c89373057d1" Jul 12 00:17:49.686830 containerd[1448]: time="2025-07-12T00:17:49.686774648Z" level=error msg="StopPodSandbox for \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\" failed" error="failed to destroy network for sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.687071 kubelet[2459]: E0712 00:17:49.687026 2459 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:17:49.687137 kubelet[2459]: E0712 00:17:49.687084 2459 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882"} Jul 12 00:17:49.687137 kubelet[2459]: E0712 00:17:49.687126 2459 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c756f58-3e4d-4cb6-8d6f-41a98b929020\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:17:49.687234 kubelet[2459]: E0712 00:17:49.687151 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c756f58-3e4d-4cb6-8d6f-41a98b929020\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69966d7884-bjkn9" podUID="7c756f58-3e4d-4cb6-8d6f-41a98b929020" Jul 12 00:17:49.690551 containerd[1448]: time="2025-07-12T00:17:49.690512776Z" level=error msg="StopPodSandbox for \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\" failed" error="failed to destroy network for sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.690875 kubelet[2459]: E0712 00:17:49.690839 2459 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Jul 12 00:17:49.690925 kubelet[2459]: E0712 00:17:49.690883 2459 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770"} Jul 12 00:17:49.690925 kubelet[2459]: E0712 00:17:49.690918 2459 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74f3a898-fc16-41f9-a59d-febcf1761d1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:17:49.691001 kubelet[2459]: E0712 00:17:49.690939 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74f3a898-fc16-41f9-a59d-febcf1761d1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-k7xgr" podUID="74f3a898-fc16-41f9-a59d-febcf1761d1e" Jul 12 00:17:49.692375 containerd[1448]: time="2025-07-12T00:17:49.692340118Z" level=error msg="StopPodSandbox for \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\" failed" error="failed to destroy network for sandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.692544 kubelet[2459]: E0712 00:17:49.692512 2459 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Jul 12 00:17:49.692580 kubelet[2459]: E0712 00:17:49.692550 2459 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518"} Jul 12 00:17:49.692580 kubelet[2459]: E0712 00:17:49.692573 2459 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c50cf70e-483e-49bb-a8b2-b017faf73702\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:17:49.692650 kubelet[2459]: E0712 00:17:49.692593 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c50cf70e-483e-49bb-a8b2-b017faf73702\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-knnq4" podUID="c50cf70e-483e-49bb-a8b2-b017faf73702" Jul 12 00:17:49.697023 containerd[1448]: time="2025-07-12T00:17:49.696940715Z" level=error msg="StopPodSandbox for \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\" failed" error="failed to destroy network for sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:17:49.697173 kubelet[2459]: E0712 00:17:49.697128 2459 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:17:49.697218 kubelet[2459]: E0712 00:17:49.697169 2459 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0"} Jul 12 00:17:49.697218 kubelet[2459]: E0712 00:17:49.697196 2459 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"015b368c-3c89-4707-af85-1b98a6fb48da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:17:49.697279 kubelet[2459]: E0712 00:17:49.697213 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"015b368c-3c89-4707-af85-1b98a6fb48da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wfgr2" podUID="015b368c-3c89-4707-af85-1b98a6fb48da" Jul 12 00:17:51.968512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3191492178.mount: Deactivated successfully. Jul 12 00:17:52.147753 containerd[1448]: time="2025-07-12T00:17:52.147688300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:52.148774 containerd[1448]: time="2025-07-12T00:17:52.148736731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:17:52.149902 containerd[1448]: time="2025-07-12T00:17:52.149865325Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:52.152193 containerd[1448]: time="2025-07-12T00:17:52.152132993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:52.153029 containerd[1448]: time="2025-07-12T00:17:52.152776973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.575481059s" Jul 12 00:17:52.153029 containerd[1448]: time="2025-07-12T00:17:52.152814054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:17:52.165271 containerd[1448]: time="2025-07-12T00:17:52.165217906Z" level=info msg="CreateContainer within sandbox \"d0a942ca14fffda77d781d0d084ecea70490c08f94ee03a1eda8ec822ba6f7dd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:17:52.187766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123142058.mount: Deactivated successfully. Jul 12 00:17:52.189938 containerd[1448]: time="2025-07-12T00:17:52.189881207Z" level=info msg="CreateContainer within sandbox \"d0a942ca14fffda77d781d0d084ecea70490c08f94ee03a1eda8ec822ba6f7dd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2dc2eb2c1fb08f28b4ad6ead3460e94cbc019c3118a35cdf85df066027f5c00c\"" Jul 12 00:17:52.190676 containerd[1448]: time="2025-07-12T00:17:52.190614629Z" level=info msg="StartContainer for \"2dc2eb2c1fb08f28b4ad6ead3460e94cbc019c3118a35cdf85df066027f5c00c\"" Jul 12 00:17:52.278898 systemd[1]: Started cri-containerd-2dc2eb2c1fb08f28b4ad6ead3460e94cbc019c3118a35cdf85df066027f5c00c.scope - libcontainer container 2dc2eb2c1fb08f28b4ad6ead3460e94cbc019c3118a35cdf85df066027f5c00c. 
Jul 12 00:17:52.364756 containerd[1448]: time="2025-07-12T00:17:52.364637776Z" level=info msg="StartContainer for \"2dc2eb2c1fb08f28b4ad6ead3460e94cbc019c3118a35cdf85df066027f5c00c\" returns successfully" Jul 12 00:17:52.563927 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:17:52.564033 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 12 00:17:52.644042 kubelet[2459]: I0712 00:17:52.643843 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l274m" podStartSLOduration=1.403302657 podStartE2EDuration="11.643827042s" podCreationTimestamp="2025-07-12 00:17:41 +0000 UTC" firstStartedPulling="2025-07-12 00:17:41.913197336 +0000 UTC m=+18.802878112" lastFinishedPulling="2025-07-12 00:17:52.153721681 +0000 UTC m=+29.043402497" observedRunningTime="2025-07-12 00:17:52.642472601 +0000 UTC m=+29.532153417" watchObservedRunningTime="2025-07-12 00:17:52.643827042 +0000 UTC m=+29.533507858" Jul 12 00:17:52.704725 containerd[1448]: time="2025-07-12T00:17:52.704671030Z" level=info msg="StopPodSandbox for \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\"" Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:52.859 [INFO][3728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:52.861 [INFO][3728] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" iface="eth0" netns="/var/run/netns/cni-3f088b9e-c336-02f8-1e27-9e2886bc8063" Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:52.861 [INFO][3728] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" iface="eth0" netns="/var/run/netns/cni-3f088b9e-c336-02f8-1e27-9e2886bc8063" Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:52.866 [INFO][3728] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" iface="eth0" netns="/var/run/netns/cni-3f088b9e-c336-02f8-1e27-9e2886bc8063" Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:52.867 [INFO][3728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:52.867 [INFO][3728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:53.042 [INFO][3739] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" HandleID="k8s-pod-network.6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Workload="localhost-k8s-whisker--69966d7884--bjkn9-eth0" Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:53.042 [INFO][3739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:53.042 [INFO][3739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:53.052 [WARNING][3739] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" HandleID="k8s-pod-network.6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Workload="localhost-k8s-whisker--69966d7884--bjkn9-eth0" Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:53.052 [INFO][3739] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" HandleID="k8s-pod-network.6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Workload="localhost-k8s-whisker--69966d7884--bjkn9-eth0" Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:53.053 [INFO][3739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:17:53.057629 containerd[1448]: 2025-07-12 00:17:53.055 [INFO][3728] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:17:53.058112 containerd[1448]: time="2025-07-12T00:17:53.057773807Z" level=info msg="TearDown network for sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\" successfully" Jul 12 00:17:53.058112 containerd[1448]: time="2025-07-12T00:17:53.057802608Z" level=info msg="StopPodSandbox for \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\" returns successfully" Jul 12 00:17:53.060193 systemd[1]: run-netns-cni\x2d3f088b9e\x2dc336\x2d02f8\x2d1e27\x2d9e2886bc8063.mount: Deactivated successfully. Jul 12 00:17:53.225487 kubelet[2459]: I0712 00:17:53.225446 2459 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c756f58-3e4d-4cb6-8d6f-41a98b929020-whisker-backend-key-pair\") pod \"7c756f58-3e4d-4cb6-8d6f-41a98b929020\" (UID: \"7c756f58-3e4d-4cb6-8d6f-41a98b929020\") " Jul 12 00:17:53.225613 kubelet[2459]: I0712 00:17:53.225498 2459 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c756f58-3e4d-4cb6-8d6f-41a98b929020-whisker-ca-bundle\") pod \"7c756f58-3e4d-4cb6-8d6f-41a98b929020\" (UID: \"7c756f58-3e4d-4cb6-8d6f-41a98b929020\") " Jul 12 00:17:53.225613 kubelet[2459]: I0712 00:17:53.225527 2459 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf8bt\" (UniqueName: \"kubernetes.io/projected/7c756f58-3e4d-4cb6-8d6f-41a98b929020-kube-api-access-qf8bt\") pod \"7c756f58-3e4d-4cb6-8d6f-41a98b929020\" (UID: \"7c756f58-3e4d-4cb6-8d6f-41a98b929020\") " Jul 12 00:17:53.226085 kubelet[2459]: I0712 00:17:53.226027 2459 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c756f58-3e4d-4cb6-8d6f-41a98b929020-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7c756f58-3e4d-4cb6-8d6f-41a98b929020" (UID: "7c756f58-3e4d-4cb6-8d6f-41a98b929020"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:17:53.230006 systemd[1]: var-lib-kubelet-pods-7c756f58\x2d3e4d\x2d4cb6\x2d8d6f\x2d41a98b929020-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqf8bt.mount: Deactivated successfully. Jul 12 00:17:53.230107 systemd[1]: var-lib-kubelet-pods-7c756f58\x2d3e4d\x2d4cb6\x2d8d6f\x2d41a98b929020-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 12 00:17:53.231822 kubelet[2459]: I0712 00:17:53.231658 2459 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c756f58-3e4d-4cb6-8d6f-41a98b929020-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7c756f58-3e4d-4cb6-8d6f-41a98b929020" (UID: "7c756f58-3e4d-4cb6-8d6f-41a98b929020"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:17:53.231822 kubelet[2459]: I0712 00:17:53.231777 2459 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c756f58-3e4d-4cb6-8d6f-41a98b929020-kube-api-access-qf8bt" (OuterVolumeSpecName: "kube-api-access-qf8bt") pod "7c756f58-3e4d-4cb6-8d6f-41a98b929020" (UID: "7c756f58-3e4d-4cb6-8d6f-41a98b929020"). InnerVolumeSpecName "kube-api-access-qf8bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:17:53.326529 kubelet[2459]: I0712 00:17:53.326354 2459 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c756f58-3e4d-4cb6-8d6f-41a98b929020-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 12 00:17:53.326529 kubelet[2459]: I0712 00:17:53.326414 2459 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf8bt\" (UniqueName: \"kubernetes.io/projected/7c756f58-3e4d-4cb6-8d6f-41a98b929020-kube-api-access-qf8bt\") on node \"localhost\" DevicePath \"\"" Jul 12 00:17:53.326529 kubelet[2459]: I0712 00:17:53.326425 2459 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c756f58-3e4d-4cb6-8d6f-41a98b929020-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 12 00:17:53.626452 kubelet[2459]: I0712 00:17:53.626332 2459 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:17:53.632862 systemd[1]: Removed slice kubepods-besteffort-pod7c756f58_3e4d_4cb6_8d6f_41a98b929020.slice - libcontainer container kubepods-besteffort-pod7c756f58_3e4d_4cb6_8d6f_41a98b929020.slice. Jul 12 00:17:53.699966 systemd[1]: Created slice kubepods-besteffort-podc8a58c63_2409_45fe_9e3b_057aed0d2018.slice - libcontainer container kubepods-besteffort-podc8a58c63_2409_45fe_9e3b_057aed0d2018.slice. 
Jul 12 00:17:53.829440 kubelet[2459]: I0712 00:17:53.829341 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a58c63-2409-45fe-9e3b-057aed0d2018-whisker-ca-bundle\") pod \"whisker-677875d5b8-nzxqn\" (UID: \"c8a58c63-2409-45fe-9e3b-057aed0d2018\") " pod="calico-system/whisker-677875d5b8-nzxqn" Jul 12 00:17:53.829440 kubelet[2459]: I0712 00:17:53.829413 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8a58c63-2409-45fe-9e3b-057aed0d2018-whisker-backend-key-pair\") pod \"whisker-677875d5b8-nzxqn\" (UID: \"c8a58c63-2409-45fe-9e3b-057aed0d2018\") " pod="calico-system/whisker-677875d5b8-nzxqn" Jul 12 00:17:53.829440 kubelet[2459]: I0712 00:17:53.829443 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjb6\" (UniqueName: \"kubernetes.io/projected/c8a58c63-2409-45fe-9e3b-057aed0d2018-kube-api-access-xcjb6\") pod \"whisker-677875d5b8-nzxqn\" (UID: \"c8a58c63-2409-45fe-9e3b-057aed0d2018\") " pod="calico-system/whisker-677875d5b8-nzxqn" Jul 12 00:17:54.004235 containerd[1448]: time="2025-07-12T00:17:54.004124830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-677875d5b8-nzxqn,Uid:c8a58c63-2409-45fe-9e3b-057aed0d2018,Namespace:calico-system,Attempt:0,}" Jul 12 00:17:54.233713 systemd-networkd[1385]: cali0ea3e985544: Link UP Jul 12 00:17:54.233949 systemd-networkd[1385]: cali0ea3e985544: Gained carrier Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.076 [INFO][3787] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.097 [INFO][3787] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--677875d5b8--nzxqn-eth0 whisker-677875d5b8- calico-system c8a58c63-2409-45fe-9e3b-057aed0d2018 878 0 2025-07-12 00:17:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:677875d5b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-677875d5b8-nzxqn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0ea3e985544 [] [] }} ContainerID="186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" Namespace="calico-system" Pod="whisker-677875d5b8-nzxqn" WorkloadEndpoint="localhost-k8s-whisker--677875d5b8--nzxqn-" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.097 [INFO][3787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" Namespace="calico-system" Pod="whisker-677875d5b8-nzxqn" WorkloadEndpoint="localhost-k8s-whisker--677875d5b8--nzxqn-eth0" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.156 [INFO][3872] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" HandleID="k8s-pod-network.186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" Workload="localhost-k8s-whisker--677875d5b8--nzxqn-eth0" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.156 [INFO][3872] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" HandleID="k8s-pod-network.186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" Workload="localhost-k8s-whisker--677875d5b8--nzxqn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3980), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-677875d5b8-nzxqn", "timestamp":"2025-07-12 00:17:54.156613781 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.156 [INFO][3872] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.156 [INFO][3872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.156 [INFO][3872] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.175 [INFO][3872] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" host="localhost" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.187 [INFO][3872] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.200 [INFO][3872] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.202 [INFO][3872] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.204 [INFO][3872] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.204 [INFO][3872] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" host="localhost" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.206 [INFO][3872] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1 Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.213 [INFO][3872] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" host="localhost" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.222 [INFO][3872] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" host="localhost" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.222 [INFO][3872] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" host="localhost" Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.222 [INFO][3872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:17:54.250314 containerd[1448]: 2025-07-12 00:17:54.222 [INFO][3872] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" HandleID="k8s-pod-network.186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" Workload="localhost-k8s-whisker--677875d5b8--nzxqn-eth0" Jul 12 00:17:54.250897 containerd[1448]: 2025-07-12 00:17:54.225 [INFO][3787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" Namespace="calico-system" Pod="whisker-677875d5b8-nzxqn" WorkloadEndpoint="localhost-k8s-whisker--677875d5b8--nzxqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--677875d5b8--nzxqn-eth0", GenerateName:"whisker-677875d5b8-", Namespace:"calico-system", SelfLink:"", UID:"c8a58c63-2409-45fe-9e3b-057aed0d2018", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"677875d5b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-677875d5b8-nzxqn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0ea3e985544", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:17:54.250897 containerd[1448]: 2025-07-12 00:17:54.225 [INFO][3787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" Namespace="calico-system" Pod="whisker-677875d5b8-nzxqn" WorkloadEndpoint="localhost-k8s-whisker--677875d5b8--nzxqn-eth0" Jul 12 00:17:54.250897 containerd[1448]: 2025-07-12 00:17:54.225 [INFO][3787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ea3e985544 ContainerID="186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" Namespace="calico-system" Pod="whisker-677875d5b8-nzxqn" WorkloadEndpoint="localhost-k8s-whisker--677875d5b8--nzxqn-eth0" Jul 12 00:17:54.250897 containerd[1448]: 2025-07-12 00:17:54.234 [INFO][3787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" Namespace="calico-system" Pod="whisker-677875d5b8-nzxqn" WorkloadEndpoint="localhost-k8s-whisker--677875d5b8--nzxqn-eth0" Jul 12 00:17:54.250897 containerd[1448]: 2025-07-12 00:17:54.235 [INFO][3787] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" Namespace="calico-system" Pod="whisker-677875d5b8-nzxqn" WorkloadEndpoint="localhost-k8s-whisker--677875d5b8--nzxqn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--677875d5b8--nzxqn-eth0", GenerateName:"whisker-677875d5b8-", Namespace:"calico-system", SelfLink:"", UID:"c8a58c63-2409-45fe-9e3b-057aed0d2018", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"677875d5b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1", Pod:"whisker-677875d5b8-nzxqn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0ea3e985544", MAC:"16:ea:78:13:bf:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:17:54.250897 containerd[1448]: 2025-07-12 00:17:54.247 [INFO][3787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1" Namespace="calico-system" Pod="whisker-677875d5b8-nzxqn" WorkloadEndpoint="localhost-k8s-whisker--677875d5b8--nzxqn-eth0" Jul 12 00:17:54.273770 containerd[1448]: time="2025-07-12T00:17:54.273159815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:54.273770 containerd[1448]: time="2025-07-12T00:17:54.273608867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:54.273770 containerd[1448]: time="2025-07-12T00:17:54.273623348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:54.273770 containerd[1448]: time="2025-07-12T00:17:54.273718470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:54.292591 systemd[1]: Started cri-containerd-186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1.scope - libcontainer container 186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1. 
Jul 12 00:17:54.309699 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:17:54.334708 containerd[1448]: time="2025-07-12T00:17:54.334603040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-677875d5b8-nzxqn,Uid:c8a58c63-2409-45fe-9e3b-057aed0d2018,Namespace:calico-system,Attempt:0,} returns sandbox id \"186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1\"" Jul 12 00:17:54.338097 containerd[1448]: time="2025-07-12T00:17:54.337717126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:17:55.157356 containerd[1448]: time="2025-07-12T00:17:55.157302703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:55.158144 containerd[1448]: time="2025-07-12T00:17:55.158103805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 12 00:17:55.158815 containerd[1448]: time="2025-07-12T00:17:55.158783823Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:55.161035 containerd[1448]: time="2025-07-12T00:17:55.160990042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:55.161915 containerd[1448]: time="2025-07-12T00:17:55.161884386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 824.126658ms" Jul 12 00:17:55.161981 containerd[1448]: time="2025-07-12T00:17:55.161917947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:17:55.164009 containerd[1448]: time="2025-07-12T00:17:55.163956521Z" level=info msg="CreateContainer within sandbox \"186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:17:55.182090 containerd[1448]: time="2025-07-12T00:17:55.182026724Z" level=info msg="CreateContainer within sandbox \"186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4e354bfc921423adf42b4f2f5d59a392d7746970d3ca00c4ee641c4764438e06\"" Jul 12 00:17:55.182654 containerd[1448]: time="2025-07-12T00:17:55.182619900Z" level=info msg="StartContainer for \"4e354bfc921423adf42b4f2f5d59a392d7746970d3ca00c4ee641c4764438e06\"" Jul 12 00:17:55.191034 kubelet[2459]: I0712 00:17:55.190998 2459 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c756f58-3e4d-4cb6-8d6f-41a98b929020" path="/var/lib/kubelet/pods/7c756f58-3e4d-4cb6-8d6f-41a98b929020/volumes" Jul 12 00:17:55.214592 systemd[1]: Started cri-containerd-4e354bfc921423adf42b4f2f5d59a392d7746970d3ca00c4ee641c4764438e06.scope - libcontainer container 4e354bfc921423adf42b4f2f5d59a392d7746970d3ca00c4ee641c4764438e06. 
Jul 12 00:17:55.265573 containerd[1448]: time="2025-07-12T00:17:55.265533914Z" level=info msg="StartContainer for \"4e354bfc921423adf42b4f2f5d59a392d7746970d3ca00c4ee641c4764438e06\" returns successfully" Jul 12 00:17:55.267759 containerd[1448]: time="2025-07-12T00:17:55.267719372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:17:55.601518 systemd-networkd[1385]: cali0ea3e985544: Gained IPv6LL Jul 12 00:17:56.588126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1085385576.mount: Deactivated successfully. Jul 12 00:17:56.618864 containerd[1448]: time="2025-07-12T00:17:56.617993794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:56.618864 containerd[1448]: time="2025-07-12T00:17:56.618808895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:17:56.619656 containerd[1448]: time="2025-07-12T00:17:56.619624476Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:56.621895 containerd[1448]: time="2025-07-12T00:17:56.621858374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:56.623165 containerd[1448]: time="2025-07-12T00:17:56.623128487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.355369073s" Jul 12 00:17:56.623634 containerd[1448]: time="2025-07-12T00:17:56.623611459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:17:56.626159 containerd[1448]: time="2025-07-12T00:17:56.626124764Z" level=info msg="CreateContainer within sandbox \"186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:17:56.637916 containerd[1448]: time="2025-07-12T00:17:56.637872226Z" level=info msg="CreateContainer within sandbox \"186fb1a2d29c71924db8062593563e950d4719566bcfa7c2012d23076097d8d1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a892ad48fac1f17055a787adff394e619ea377813bd89f640d93db928b4320ed\"" Jul 12 00:17:56.638670 containerd[1448]: time="2025-07-12T00:17:56.638630085Z" level=info msg="StartContainer for \"a892ad48fac1f17055a787adff394e619ea377813bd89f640d93db928b4320ed\"" Jul 12 00:17:56.680649 systemd[1]: Started cri-containerd-a892ad48fac1f17055a787adff394e619ea377813bd89f640d93db928b4320ed.scope - libcontainer container a892ad48fac1f17055a787adff394e619ea377813bd89f640d93db928b4320ed. 
Jul 12 00:17:56.719561 containerd[1448]: time="2025-07-12T00:17:56.719505647Z" level=info msg="StartContainer for \"a892ad48fac1f17055a787adff394e619ea377813bd89f640d93db928b4320ed\" returns successfully" Jul 12 00:17:57.604161 kubelet[2459]: I0712 00:17:57.604108 2459 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:17:57.604570 kubelet[2459]: E0712 00:17:57.604514 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:57.644205 kubelet[2459]: E0712 00:17:57.644169 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:58.389442 kernel: bpftool[4118]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 12 00:17:58.568017 systemd-networkd[1385]: vxlan.calico: Link UP Jul 12 00:17:58.568027 systemd-networkd[1385]: vxlan.calico: Gained carrier Jul 12 00:18:00.188309 containerd[1448]: time="2025-07-12T00:18:00.188257994Z" level=info msg="StopPodSandbox for \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\"" Jul 12 00:18:00.273544 systemd-networkd[1385]: vxlan.calico: Gained IPv6LL Jul 12 00:18:00.278164 kubelet[2459]: I0712 00:18:00.277907 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-677875d5b8-nzxqn" podStartSLOduration=4.989193073 podStartE2EDuration="7.277866002s" podCreationTimestamp="2025-07-12 00:17:53 +0000 UTC" firstStartedPulling="2025-07-12 00:17:54.335857194 +0000 UTC m=+31.225538010" lastFinishedPulling="2025-07-12 00:17:56.624530123 +0000 UTC m=+33.514210939" observedRunningTime="2025-07-12 00:17:57.664598482 +0000 UTC m=+34.554279298" watchObservedRunningTime="2025-07-12 00:18:00.277866002 +0000 UTC m=+37.167546778" Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.277 [INFO][4249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.278 [INFO][4249] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" iface="eth0" netns="/var/run/netns/cni-fe4982b6-8078-f7cf-b662-60e9db89530b" Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.278 [INFO][4249] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" iface="eth0" netns="/var/run/netns/cni-fe4982b6-8078-f7cf-b662-60e9db89530b" Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.278 [INFO][4249] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" iface="eth0" netns="/var/run/netns/cni-fe4982b6-8078-f7cf-b662-60e9db89530b" Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.278 [INFO][4249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.278 [INFO][4249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.310 [INFO][4258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" HandleID="k8s-pod-network.a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.310 [INFO][4258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.310 [INFO][4258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.337 [WARNING][4258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" HandleID="k8s-pod-network.a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.337 [INFO][4258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" HandleID="k8s-pod-network.a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.339 [INFO][4258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:00.342969 containerd[1448]: 2025-07-12 00:18:00.341 [INFO][4249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:00.343375 containerd[1448]: time="2025-07-12T00:18:00.343124304Z" level=info msg="TearDown network for sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\" successfully" Jul 12 00:18:00.343375 containerd[1448]: time="2025-07-12T00:18:00.343151945Z" level=info msg="StopPodSandbox for \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\" returns successfully" Jul 12 00:18:00.343821 containerd[1448]: time="2025-07-12T00:18:00.343792319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5f84d77b-mwdk4,Uid:7dcf09fb-a512-439e-938a-bfe4c44b49b4,Namespace:calico-system,Attempt:1,}" Jul 12 00:18:00.345736 systemd[1]: run-netns-cni\x2dfe4982b6\x2d8078\x2df7cf\x2db662\x2d60e9db89530b.mount: Deactivated successfully. 
Jul 12 00:18:00.555838 systemd-networkd[1385]: cali16dd4202b4d: Link UP Jul 12 00:18:00.556208 systemd-networkd[1385]: cali16dd4202b4d: Gained carrier Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.472 [INFO][4266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0 calico-kube-controllers-7b5f84d77b- calico-system 7dcf09fb-a512-439e-938a-bfe4c44b49b4 920 0 2025-07-12 00:17:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b5f84d77b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7b5f84d77b-mwdk4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali16dd4202b4d [] [] }} ContainerID="3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" Namespace="calico-system" Pod="calico-kube-controllers-7b5f84d77b-mwdk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.472 [INFO][4266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" Namespace="calico-system" Pod="calico-kube-controllers-7b5f84d77b-mwdk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.500 [INFO][4281] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" HandleID="k8s-pod-network.3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.500 [INFO][4281] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" HandleID="k8s-pod-network.3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ce50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7b5f84d77b-mwdk4", "timestamp":"2025-07-12 00:18:00.500267706 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.500 [INFO][4281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.500 [INFO][4281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.500 [INFO][4281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.516 [INFO][4281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" host="localhost" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.524 [INFO][4281] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.531 [INFO][4281] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.533 [INFO][4281] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.537 [INFO][4281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.537 [INFO][4281] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" host="localhost" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.538 [INFO][4281] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710 Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.542 [INFO][4281] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" host="localhost" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.551 [INFO][4281] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" host="localhost" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.551 [INFO][4281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" host="localhost" Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.551 [INFO][4281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
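For scale: the affine block 192.168.88.128/26 above spans 2^(32-26) = 64 addresses, 192.168.88.128 through .191, so the claimed 192.168.88.130 is the third slot in the block. A quick sketch with Go's net/netip confirming the arithmetic:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        size := 1 << (32 - block.Bits()) // 6 host bits -> 64 addresses
        first := block.Addr()
        last := first
        for i := 0; i < size-1; i++ {
            last = last.Next()
        }
        fmt.Printf("%s holds %d addresses: %s-%s\n", block, size, first, last)
        fmt.Println(block.Contains(netip.MustParseAddr("192.168.88.130"))) // true
    }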
Jul 12 00:18:00.572247 containerd[1448]: 2025-07-12 00:18:00.551 [INFO][4281] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" HandleID="k8s-pod-network.3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:00.573097 containerd[1448]: 2025-07-12 00:18:00.553 [INFO][4266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" Namespace="calico-system" Pod="calico-kube-controllers-7b5f84d77b-mwdk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0", GenerateName:"calico-kube-controllers-7b5f84d77b-", Namespace:"calico-system", SelfLink:"", UID:"7dcf09fb-a512-439e-938a-bfe4c44b49b4", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5f84d77b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7b5f84d77b-mwdk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16dd4202b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:00.573097 containerd[1448]: 2025-07-12 00:18:00.553 [INFO][4266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" Namespace="calico-system" Pod="calico-kube-controllers-7b5f84d77b-mwdk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:00.573097 containerd[1448]: 2025-07-12 00:18:00.553 [INFO][4266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16dd4202b4d ContainerID="3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" Namespace="calico-system" Pod="calico-kube-controllers-7b5f84d77b-mwdk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:00.573097 containerd[1448]: 2025-07-12 00:18:00.556 [INFO][4266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" Namespace="calico-system" Pod="calico-kube-controllers-7b5f84d77b-mwdk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:00.573097 containerd[1448]: 2025-07-12 00:18:00.557 [INFO][4266] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" Namespace="calico-system" Pod="calico-kube-controllers-7b5f84d77b-mwdk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0", GenerateName:"calico-kube-controllers-7b5f84d77b-", Namespace:"calico-system", SelfLink:"", UID:"7dcf09fb-a512-439e-938a-bfe4c44b49b4", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5f84d77b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710", Pod:"calico-kube-controllers-7b5f84d77b-mwdk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16dd4202b4d", MAC:"4e:9f:9d:fb:d5:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:00.573097 containerd[1448]: 2025-07-12 00:18:00.569 [INFO][4266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710" Namespace="calico-system" Pod="calico-kube-controllers-7b5f84d77b-mwdk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:00.586645 containerd[1448]: time="2025-07-12T00:18:00.586560800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:18:00.586645 containerd[1448]: time="2025-07-12T00:18:00.586624921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:18:00.586645 containerd[1448]: time="2025-07-12T00:18:00.586647802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:00.586887 containerd[1448]: time="2025-07-12T00:18:00.586738484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:00.619589 systemd[1]: Started cri-containerd-3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710.scope - libcontainer container 3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710. 
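The bracketed Calico entries interleaved with containerd's own time=... lines share one shape: "<date> <time> [LEVEL][pid] <file.go> <line>: <message key=\"value\" ...>", where the bracketed number appears to be the PID of that CNI plugin invocation (it changes per call: 4249, 4258, 4266, ...). A sketch of splitting those fields with a regular expression; the pattern is our reconstruction from the lines above, not a published grammar:

    package main

    import (
        "fmt"
        "regexp"
    )

    // calicoLine matches entries such as:
    //   2025-07-12 00:18:00.551 [INFO][4281] ipam/ipam_plugin.go 283: ...
    var calicoLine = regexp.MustCompile(
        `^(\S+ \S+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

    func main() {
        line := `2025-07-12 00:18:00.337 [WARNING][4258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring`
        if m := calicoLine.FindStringSubmatch(line); m != nil {
            fmt.Printf("time=%s level=%s pid=%s source=%s:%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }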
Jul 12 00:18:00.632100 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 00:18:00.648282 containerd[1448]: time="2025-07-12T00:18:00.648229302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5f84d77b-mwdk4,Uid:7dcf09fb-a512-439e-938a-bfe4c44b49b4,Namespace:calico-system,Attempt:1,} returns sandbox id \"3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710\""
Jul 12 00:18:00.649941 containerd[1448]: time="2025-07-12T00:18:00.649908019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 12 00:18:01.188323 containerd[1448]: time="2025-07-12T00:18:01.188157070Z" level=info msg="StopPodSandbox for \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\""
Jul 12 00:18:01.188323 containerd[1448]: time="2025-07-12T00:18:01.188184511Z" level=info msg="StopPodSandbox for \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\""
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.261 [INFO][4356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.261 [INFO][4356] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" iface="eth0" netns="/var/run/netns/cni-b310f1c0-95bd-0592-55cf-429d00bd97e5"
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.262 [INFO][4356] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" iface="eth0" netns="/var/run/netns/cni-b310f1c0-95bd-0592-55cf-429d00bd97e5"
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.263 [INFO][4356] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" iface="eth0" netns="/var/run/netns/cni-b310f1c0-95bd-0592-55cf-429d00bd97e5"
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.263 [INFO][4356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.263 [INFO][4356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.294 [INFO][4377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" HandleID="k8s-pod-network.37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0"
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.294 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.294 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.303 [WARNING][4377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" HandleID="k8s-pod-network.37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0"
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.303 [INFO][4377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" HandleID="k8s-pod-network.37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0"
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.305 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:18:01.308585 containerd[1448]: 2025-07-12 00:18:01.306 [INFO][4356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:01.309181 containerd[1448]: time="2025-07-12T00:18:01.308616404Z" level=info msg="TearDown network for sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\" successfully"
Jul 12 00:18:01.309181 containerd[1448]: time="2025-07-12T00:18:01.308643685Z" level=info msg="StopPodSandbox for \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\" returns successfully"
Jul 12 00:18:01.309354 containerd[1448]: time="2025-07-12T00:18:01.309304419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf689cfcd-9drtq,Uid:1ca0963e-ae80-4ed4-8ecd-1417da594c22,Namespace:calico-apiserver,Attempt:1,}"
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.285 [INFO][4365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770"
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.285 [INFO][4365] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" iface="eth0" netns="/var/run/netns/cni-b2d4f126-67a0-b1ef-010c-3bea8e0c2863"
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.285 [INFO][4365] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" iface="eth0" netns="/var/run/netns/cni-b2d4f126-67a0-b1ef-010c-3bea8e0c2863"
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.285 [INFO][4365] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" iface="eth0" netns="/var/run/netns/cni-b2d4f126-67a0-b1ef-010c-3bea8e0c2863"
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.285 [INFO][4365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770"
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.285 [INFO][4365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770"
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.313 [INFO][4384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" HandleID="k8s-pod-network.dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0"
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.313 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.313 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.324 [WARNING][4384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" HandleID="k8s-pod-network.dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0"
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.324 [INFO][4384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" HandleID="k8s-pod-network.dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0"
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.325 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:18:01.331009 containerd[1448]: 2025-07-12 00:18:01.327 [INFO][4365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770"
Jul 12 00:18:01.331646 containerd[1448]: time="2025-07-12T00:18:01.331111453Z" level=info msg="TearDown network for sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\" successfully"
Jul 12 00:18:01.331646 containerd[1448]: time="2025-07-12T00:18:01.331137133Z" level=info msg="StopPodSandbox for \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\" returns successfully"
Jul 12 00:18:01.331697 kubelet[2459]: E0712 00:18:01.331525 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:01.332191 containerd[1448]: time="2025-07-12T00:18:01.332152555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k7xgr,Uid:74f3a898-fc16-41f9-a59d-febcf1761d1e,Namespace:kube-system,Attempt:1,}"
Jul 12 00:18:01.347051 systemd[1]: run-netns-cni\x2db310f1c0\x2d95bd\x2d0592\x2d55cf\x2d429d00bd97e5.mount: Deactivated successfully.
Jul 12 00:18:01.347157 systemd[1]: run-netns-cni\x2db2d4f126\x2d67a0\x2db1ef\x2d010c\x2d3bea8e0c2863.mount: Deactivated successfully.
Jul 12 00:18:01.473667 systemd-networkd[1385]: cali4e8419d8bec: Link UP Jul 12 00:18:01.474548 systemd-networkd[1385]: cali4e8419d8bec: Gained carrier Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.391 [INFO][4396] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0 calico-apiserver-6bf689cfcd- calico-apiserver 1ca0963e-ae80-4ed4-8ecd-1417da594c22 929 0 2025-07-12 00:17:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bf689cfcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bf689cfcd-9drtq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e8419d8bec [] [] }} ContainerID="c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-9drtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.391 [INFO][4396] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-9drtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.413 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" HandleID="k8s-pod-network.c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.413 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" HandleID="k8s-pod-network.c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005819b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bf689cfcd-9drtq", "timestamp":"2025-07-12 00:18:01.413215995 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.413 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.413 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.413 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.422 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" host="localhost" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.428 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.431 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.433 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.435 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.435 [INFO][4410] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" host="localhost" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.436 [INFO][4410] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.462 [INFO][4410] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" host="localhost" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.468 [INFO][4410] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" host="localhost" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.468 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" host="localhost" Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.468 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
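The addresses in this log come out of the same /26 in ascending order (.130 for the kube-controllers pod above, .131 here), consistent with a first-free-slot scan of the block; the "Writing block in order to claim IPs" step is where the updated block is persisted, so concurrent claimants conflict on the write instead of double-assigning. A toy first-fit allocator illustrating only the ordering; Calico's real allocator also tracks handles, reservations, and retries on write conflicts:

    package main

    import "fmt"

    // block models a /26 as 64 slots; offset 0 is 192.168.88.128.
    type block struct{ used [64]bool }

    func (b *block) alloc() (int, bool) {
        for i, u := range b.used {
            if !u {
                b.used[i] = true
                return i, true
            }
        }
        return 0, false
    }

    func main() {
        var b block
        b.used[0] = true // assume .128 and .129 were claimed by
        b.used[1] = true // endpoints earlier in this boot
        for i := 0; i < 3; i++ {
            if off, ok := b.alloc(); ok {
                fmt.Printf("assigned 192.168.88.%d\n", 128+off)
            }
        }
        // assigned .130, then .131, then .132 -- the order seen in the log
    }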
Jul 12 00:18:01.497425 containerd[1448]: 2025-07-12 00:18:01.469 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" HandleID="k8s-pod-network.c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0" Jul 12 00:18:01.497935 containerd[1448]: 2025-07-12 00:18:01.471 [INFO][4396] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-9drtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0", GenerateName:"calico-apiserver-6bf689cfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ca0963e-ae80-4ed4-8ecd-1417da594c22", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf689cfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bf689cfcd-9drtq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e8419d8bec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:01.497935 containerd[1448]: 2025-07-12 00:18:01.471 [INFO][4396] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-9drtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0" Jul 12 00:18:01.497935 containerd[1448]: 2025-07-12 00:18:01.471 [INFO][4396] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e8419d8bec ContainerID="c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-9drtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0" Jul 12 00:18:01.497935 containerd[1448]: 2025-07-12 00:18:01.473 [INFO][4396] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-9drtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0" Jul 12 00:18:01.497935 containerd[1448]: 2025-07-12 00:18:01.474 [INFO][4396] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-9drtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0", GenerateName:"calico-apiserver-6bf689cfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ca0963e-ae80-4ed4-8ecd-1417da594c22", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf689cfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab", Pod:"calico-apiserver-6bf689cfcd-9drtq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e8419d8bec", MAC:"32:c6:89:82:41:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:01.497935 containerd[1448]: 2025-07-12 00:18:01.490 [INFO][4396] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-9drtq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0" Jul 12 00:18:01.528199 containerd[1448]: time="2025-07-12T00:18:01.527725160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:18:01.528199 containerd[1448]: time="2025-07-12T00:18:01.528146489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:18:01.528199 containerd[1448]: time="2025-07-12T00:18:01.528158330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:01.528361 containerd[1448]: time="2025-07-12T00:18:01.528237691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:01.552591 systemd[1]: Started cri-containerd-c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab.scope - libcontainer container c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab. 
Jul 12 00:18:01.571107 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:18:01.603638 containerd[1448]: time="2025-07-12T00:18:01.603436603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf689cfcd-9drtq,Uid:1ca0963e-ae80-4ed4-8ecd-1417da594c22,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab\"" Jul 12 00:18:01.621420 systemd-networkd[1385]: calid06f8a479ac: Link UP Jul 12 00:18:01.622710 systemd-networkd[1385]: calid06f8a479ac: Gained carrier Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.532 [INFO][4419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0 coredns-7c65d6cfc9- kube-system 74f3a898-fc16-41f9-a59d-febcf1761d1e 931 0 2025-07-12 00:17:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-k7xgr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid06f8a479ac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7xgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--k7xgr-" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.532 [INFO][4419] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7xgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.571 [INFO][4465] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" HandleID="k8s-pod-network.9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.571 [INFO][4465] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" HandleID="k8s-pod-network.9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-k7xgr", "timestamp":"2025-07-12 00:18:01.571606713 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.571 [INFO][4465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.571 [INFO][4465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.571 [INFO][4465] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.581 [INFO][4465] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" host="localhost" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.585 [INFO][4465] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.600 [INFO][4465] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.602 [INFO][4465] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.604 [INFO][4465] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.604 [INFO][4465] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" host="localhost" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.606 [INFO][4465] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5 Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.609 [INFO][4465] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" host="localhost" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.615 [INFO][4465] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" host="localhost" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.615 [INFO][4465] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" host="localhost" Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.615 [INFO][4465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:18:01.635765 containerd[1448]: 2025-07-12 00:18:01.615 [INFO][4465] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" HandleID="k8s-pod-network.9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:01.636307 containerd[1448]: 2025-07-12 00:18:01.618 [INFO][4419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7xgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"74f3a898-fc16-41f9-a59d-febcf1761d1e", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-k7xgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid06f8a479ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:01.636307 containerd[1448]: 2025-07-12 00:18:01.618 [INFO][4419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7xgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:01.636307 containerd[1448]: 2025-07-12 00:18:01.618 [INFO][4419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid06f8a479ac ContainerID="9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7xgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:01.636307 containerd[1448]: 2025-07-12 00:18:01.621 [INFO][4419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7xgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:01.636307 
containerd[1448]: 2025-07-12 00:18:01.623 [INFO][4419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7xgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"74f3a898-fc16-41f9-a59d-febcf1761d1e", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5", Pod:"coredns-7c65d6cfc9-k7xgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid06f8a479ac", MAC:"9a:ad:2b:26:ab:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:01.636307 containerd[1448]: 2025-07-12 00:18:01.633 [INFO][4419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7xgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:01.667005 containerd[1448]: time="2025-07-12T00:18:01.664887937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:18:01.667005 containerd[1448]: time="2025-07-12T00:18:01.664953059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:18:01.667005 containerd[1448]: time="2025-07-12T00:18:01.664963339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:01.667005 containerd[1448]: time="2025-07-12T00:18:01.665047981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:01.685570 systemd[1]: Started cri-containerd-9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5.scope - libcontainer container 9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5. Jul 12 00:18:01.701914 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:18:01.745202 containerd[1448]: time="2025-07-12T00:18:01.744870913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k7xgr,Uid:74f3a898-fc16-41f9-a59d-febcf1761d1e,Namespace:kube-system,Attempt:1,} returns sandbox id \"9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5\"" Jul 12 00:18:01.748362 kubelet[2459]: E0712 00:18:01.745740 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:01.751657 containerd[1448]: time="2025-07-12T00:18:01.749515414Z" level=info msg="CreateContainer within sandbox \"9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:18:01.809575 systemd-networkd[1385]: cali16dd4202b4d: Gained IPv6LL Jul 12 00:18:01.810779 containerd[1448]: time="2025-07-12T00:18:01.810651141Z" level=info msg="CreateContainer within sandbox \"9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c3e25c08b0d05c5b3413f0400942763b88d5d936aa60109fc20bec9bc6122a22\"" Jul 12 00:18:01.811706 containerd[1448]: time="2025-07-12T00:18:01.811676883Z" level=info msg="StartContainer for \"c3e25c08b0d05c5b3413f0400942763b88d5d936aa60109fc20bec9bc6122a22\"" Jul 12 00:18:01.844600 systemd[1]: Started cri-containerd-c3e25c08b0d05c5b3413f0400942763b88d5d936aa60109fc20bec9bc6122a22.scope - libcontainer container c3e25c08b0d05c5b3413f0400942763b88d5d936aa60109fc20bec9bc6122a22. Jul 12 00:18:01.880438 containerd[1448]: time="2025-07-12T00:18:01.880368934Z" level=info msg="StartContainer for \"c3e25c08b0d05c5b3413f0400942763b88d5d936aa60109fc20bec9bc6122a22\" returns successfully" Jul 12 00:18:02.187996 containerd[1448]: time="2025-07-12T00:18:02.187956286Z" level=info msg="StopPodSandbox for \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\"" Jul 12 00:18:02.188204 containerd[1448]: time="2025-07-12T00:18:02.188137570Z" level=info msg="StopPodSandbox for \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\"" Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.238 [INFO][4599] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.239 [INFO][4599] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" iface="eth0" netns="/var/run/netns/cni-9af1b2f0-c84b-6cb4-f57f-01b6885acd5f" Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.239 [INFO][4599] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" iface="eth0" netns="/var/run/netns/cni-9af1b2f0-c84b-6cb4-f57f-01b6885acd5f" Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.239 [INFO][4599] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" iface="eth0" netns="/var/run/netns/cni-9af1b2f0-c84b-6cb4-f57f-01b6885acd5f" Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.239 [INFO][4599] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.239 [INFO][4599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.266 [INFO][4619] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" HandleID="k8s-pod-network.5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.266 [INFO][4619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.266 [INFO][4619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.280 [WARNING][4619] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" HandleID="k8s-pod-network.5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.280 [INFO][4619] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" HandleID="k8s-pod-network.5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.282 [INFO][4619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:02.287253 containerd[1448]: 2025-07-12 00:18:02.285 [INFO][4599] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Jul 12 00:18:02.288313 containerd[1448]: time="2025-07-12T00:18:02.287295217Z" level=info msg="TearDown network for sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\" successfully" Jul 12 00:18:02.288313 containerd[1448]: time="2025-07-12T00:18:02.287324017Z" level=info msg="StopPodSandbox for \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\" returns successfully" Jul 12 00:18:02.288313 containerd[1448]: time="2025-07-12T00:18:02.288103914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf689cfcd-j9lf2,Uid:81143345-0e90-4512-9092-036342474e19,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.265 [INFO][4608] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.265 [INFO][4608] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" iface="eth0" netns="/var/run/netns/cni-27145d5c-4e99-95db-369a-1ae86cf02e60" Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.265 [INFO][4608] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" iface="eth0" netns="/var/run/netns/cni-27145d5c-4e99-95db-369a-1ae86cf02e60" Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.266 [INFO][4608] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" iface="eth0" netns="/var/run/netns/cni-27145d5c-4e99-95db-369a-1ae86cf02e60" Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.266 [INFO][4608] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.267 [INFO][4608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.293 [INFO][4629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" HandleID="k8s-pod-network.72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.293 [INFO][4629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.293 [INFO][4629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.309 [WARNING][4629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" HandleID="k8s-pod-network.72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.309 [INFO][4629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" HandleID="k8s-pod-network.72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.312 [INFO][4629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:02.319489 containerd[1448]: 2025-07-12 00:18:02.316 [INFO][4608] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:02.320374 containerd[1448]: time="2025-07-12T00:18:02.319627337Z" level=info msg="TearDown network for sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\" successfully" Jul 12 00:18:02.320374 containerd[1448]: time="2025-07-12T00:18:02.319656898Z" level=info msg="StopPodSandbox for \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\" returns successfully" Jul 12 00:18:02.320374 containerd[1448]: time="2025-07-12T00:18:02.320375473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wfgr2,Uid:015b368c-3c89-4707-af85-1b98a6fb48da,Namespace:calico-system,Attempt:1,}" Jul 12 00:18:02.355629 systemd[1]: run-netns-cni\x2d27145d5c\x2d4e99\x2d95db\x2d369a\x2d1ae86cf02e60.mount: Deactivated successfully. Jul 12 00:18:02.355722 systemd[1]: run-netns-cni\x2d9af1b2f0\x2dc84b\x2d6cb4\x2df57f\x2d01b6885acd5f.mount: Deactivated successfully. 
Jul 12 00:18:02.449229 containerd[1448]: time="2025-07-12T00:18:02.449123102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:18:02.450887 containerd[1448]: time="2025-07-12T00:18:02.450544252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:18:02.451514 systemd-networkd[1385]: cali89a1f8dbc38: Link UP Jul 12 00:18:02.453838 systemd-networkd[1385]: cali89a1f8dbc38: Gained carrier Jul 12 00:18:02.459452 containerd[1448]: time="2025-07-12T00:18:02.458855067Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:18:02.461786 containerd[1448]: time="2025-07-12T00:18:02.461730047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:18:02.462992 containerd[1448]: time="2025-07-12T00:18:02.462950953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.813006493s" Jul 12 00:18:02.462992 containerd[1448]: time="2025-07-12T00:18:02.462989274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:18:02.467203 containerd[1448]: time="2025-07-12T00:18:02.467167162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.370 [INFO][4637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0 calico-apiserver-6bf689cfcd- calico-apiserver 81143345-0e90-4512-9092-036342474e19 948 0 2025-07-12 00:17:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bf689cfcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bf689cfcd-j9lf2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89a1f8dbc38 [] [] }} ContainerID="5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-j9lf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.370 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-j9lf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.403 [INFO][4667] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" HandleID="k8s-pod-network.5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.403 [INFO][4667] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" HandleID="k8s-pod-network.5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001377c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bf689cfcd-j9lf2", "timestamp":"2025-07-12 00:18:02.403561583 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.403 [INFO][4667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.403 [INFO][4667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.404 [INFO][4667] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.417 [INFO][4667] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" host="localhost" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.422 [INFO][4667] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.426 [INFO][4667] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.428 [INFO][4667] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.431 [INFO][4667] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.431 [INFO][4667] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" host="localhost" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.433 [INFO][4667] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.436 [INFO][4667] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" host="localhost" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.444 [INFO][4667] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" host="localhost" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.444 [INFO][4667] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" host="localhost" Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.444 [INFO][4667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:02.469129 containerd[1448]: 2025-07-12 00:18:02.444 [INFO][4667] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" HandleID="k8s-pod-network.5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" Jul 12 00:18:02.469984 containerd[1448]: 2025-07-12 00:18:02.447 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-j9lf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0", GenerateName:"calico-apiserver-6bf689cfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"81143345-0e90-4512-9092-036342474e19", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf689cfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bf689cfcd-j9lf2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89a1f8dbc38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:02.469984 containerd[1448]: 2025-07-12 00:18:02.447 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-j9lf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" Jul 12 00:18:02.469984 containerd[1448]: 2025-07-12 00:18:02.447 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89a1f8dbc38 ContainerID="5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-j9lf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" Jul 12 00:18:02.469984 containerd[1448]: 2025-07-12 00:18:02.454 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-j9lf2" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" Jul 12 00:18:02.469984 containerd[1448]: 2025-07-12 00:18:02.454 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-j9lf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0", GenerateName:"calico-apiserver-6bf689cfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"81143345-0e90-4512-9092-036342474e19", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf689cfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df", Pod:"calico-apiserver-6bf689cfcd-j9lf2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89a1f8dbc38", MAC:"a6:bf:ff:71:db:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:02.469984 containerd[1448]: 2025-07-12 00:18:02.464 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df" Namespace="calico-apiserver" Pod="calico-apiserver-6bf689cfcd-j9lf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0" Jul 12 00:18:02.483291 containerd[1448]: time="2025-07-12T00:18:02.482972174Z" level=info msg="CreateContainer within sandbox \"3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:18:02.501369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402626883.mount: Deactivated successfully. Jul 12 00:18:02.504038 containerd[1448]: time="2025-07-12T00:18:02.503779772Z" level=info msg="CreateContainer within sandbox \"3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"33ba7c1762ab8e6130a488ec527aa23b80aea24dab080fc49cfcaea7c50a9a44\"" Jul 12 00:18:02.504750 containerd[1448]: time="2025-07-12T00:18:02.504722672Z" level=info msg="StartContainer for \"33ba7c1762ab8e6130a488ec527aa23b80aea24dab080fc49cfcaea7c50a9a44\"" Jul 12 00:18:02.512067 containerd[1448]: time="2025-07-12T00:18:02.511795581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:18:02.512067 containerd[1448]: time="2025-07-12T00:18:02.511883343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:18:02.512067 containerd[1448]: time="2025-07-12T00:18:02.511894823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:02.512067 containerd[1448]: time="2025-07-12T00:18:02.511982065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:02.536888 systemd[1]: Started cri-containerd-33ba7c1762ab8e6130a488ec527aa23b80aea24dab080fc49cfcaea7c50a9a44.scope - libcontainer container 33ba7c1762ab8e6130a488ec527aa23b80aea24dab080fc49cfcaea7c50a9a44. Jul 12 00:18:02.545561 systemd[1]: Started cri-containerd-5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df.scope - libcontainer container 5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df. Jul 12 00:18:02.568845 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:18:02.574136 systemd-networkd[1385]: cali28c6b745c4c: Link UP Jul 12 00:18:02.574627 systemd-networkd[1385]: cali28c6b745c4c: Gained carrier Jul 12 00:18:02.591031 containerd[1448]: time="2025-07-12T00:18:02.590964407Z" level=info msg="StartContainer for \"33ba7c1762ab8e6130a488ec527aa23b80aea24dab080fc49cfcaea7c50a9a44\" returns successfully" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.372 [INFO][4648] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wfgr2-eth0 csi-node-driver- calico-system 015b368c-3c89-4707-af85-1b98a6fb48da 949 0 2025-07-12 00:17:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wfgr2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali28c6b745c4c [] [] }} ContainerID="787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" Namespace="calico-system" Pod="csi-node-driver-wfgr2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wfgr2-" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.372 [INFO][4648] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" Namespace="calico-system" Pod="csi-node-driver-wfgr2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.403 [INFO][4673] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" HandleID="k8s-pod-network.787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.404 [INFO][4673] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" 
HandleID="k8s-pod-network.787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059c830), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wfgr2", "timestamp":"2025-07-12 00:18:02.403840149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.404 [INFO][4673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.444 [INFO][4673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.444 [INFO][4673] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.516 [INFO][4673] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" host="localhost" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.524 [INFO][4673] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.534 [INFO][4673] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.537 [INFO][4673] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.540 [INFO][4673] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.540 [INFO][4673] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" host="localhost" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.547 [INFO][4673] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09 Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.557 [INFO][4673] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" host="localhost" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.567 [INFO][4673] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" host="localhost" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.568 [INFO][4673] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" host="localhost" Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.568 [INFO][4673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:18:02.602897 containerd[1448]: 2025-07-12 00:18:02.568 [INFO][4673] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" HandleID="k8s-pod-network.787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:02.603650 containerd[1448]: 2025-07-12 00:18:02.571 [INFO][4648] cni-plugin/k8s.go 418: Populated endpoint ContainerID="787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" Namespace="calico-system" Pod="csi-node-driver-wfgr2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wfgr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wfgr2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"015b368c-3c89-4707-af85-1b98a6fb48da", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wfgr2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28c6b745c4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:02.603650 containerd[1448]: 2025-07-12 00:18:02.571 [INFO][4648] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" Namespace="calico-system" Pod="csi-node-driver-wfgr2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:02.603650 containerd[1448]: 2025-07-12 00:18:02.571 [INFO][4648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28c6b745c4c ContainerID="787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" Namespace="calico-system" Pod="csi-node-driver-wfgr2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:02.603650 containerd[1448]: 2025-07-12 00:18:02.574 [INFO][4648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" Namespace="calico-system" Pod="csi-node-driver-wfgr2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:02.603650 containerd[1448]: 2025-07-12 00:18:02.575 [INFO][4648] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" Namespace="calico-system" Pod="csi-node-driver-wfgr2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wfgr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wfgr2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"015b368c-3c89-4707-af85-1b98a6fb48da", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09", Pod:"csi-node-driver-wfgr2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28c6b745c4c", MAC:"6a:c1:ba:44:8c:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:02.603650 containerd[1448]: 2025-07-12 00:18:02.592 [INFO][4648] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09" Namespace="calico-system" Pod="csi-node-driver-wfgr2" WorkloadEndpoint="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:02.613812 containerd[1448]: time="2025-07-12T00:18:02.613759567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf689cfcd-j9lf2,Uid:81143345-0e90-4512-9092-036342474e19,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df\"" Jul 12 00:18:02.631712 containerd[1448]: time="2025-07-12T00:18:02.631138572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:18:02.631712 containerd[1448]: time="2025-07-12T00:18:02.631210694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:18:02.631712 containerd[1448]: time="2025-07-12T00:18:02.631226294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:02.631712 containerd[1448]: time="2025-07-12T00:18:02.631320696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:02.642865 systemd-networkd[1385]: calid06f8a479ac: Gained IPv6LL Jul 12 00:18:02.652578 systemd[1]: Started cri-containerd-787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09.scope - libcontainer container 787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09. 
Jul 12 00:18:02.668664 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:18:02.680297 kubelet[2459]: E0712 00:18:02.678916 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:02.701552 kubelet[2459]: I0712 00:18:02.701370 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b5f84d77b-mwdk4" podStartSLOduration=19.884303871 podStartE2EDuration="21.70135333s" podCreationTimestamp="2025-07-12 00:17:41 +0000 UTC" firstStartedPulling="2025-07-12 00:18:00.649688854 +0000 UTC m=+37.539369670" lastFinishedPulling="2025-07-12 00:18:02.466738313 +0000 UTC m=+39.356419129" observedRunningTime="2025-07-12 00:18:02.683238029 +0000 UTC m=+39.572918845" watchObservedRunningTime="2025-07-12 00:18:02.70135333 +0000 UTC m=+39.591034146" Jul 12 00:18:02.739868 containerd[1448]: time="2025-07-12T00:18:02.739729777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wfgr2,Uid:015b368c-3c89-4707-af85-1b98a6fb48da,Namespace:calico-system,Attempt:1,} returns sandbox id \"787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09\"" Jul 12 00:18:02.858424 kubelet[2459]: I0712 00:18:02.858325 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-k7xgr" podStartSLOduration=34.858303593 podStartE2EDuration="34.858303593s" podCreationTimestamp="2025-07-12 00:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:18:02.701177166 +0000 UTC m=+39.590857982" watchObservedRunningTime="2025-07-12 00:18:02.858303593 +0000 UTC m=+39.747984409" Jul 12 00:18:03.195991 containerd[1448]: time="2025-07-12T00:18:03.195828734Z" level=info msg="StopPodSandbox for \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\"" Jul 12 00:18:03.258706 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:42694.service - OpenSSH per-connection server daemon (10.0.0.1:42694). Jul 12 00:18:03.328813 sshd[4879]: Accepted publickey for core from 10.0.0.1 port 42694 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:18:03.333035 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.254 [INFO][4871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.254 [INFO][4871] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" iface="eth0" netns="/var/run/netns/cni-5a6735de-924c-9610-abd8-0d5ce6677dc6" Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.254 [INFO][4871] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" iface="eth0" netns="/var/run/netns/cni-5a6735de-924c-9610-abd8-0d5ce6677dc6" Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.255 [INFO][4871] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" iface="eth0" netns="/var/run/netns/cni-5a6735de-924c-9610-abd8-0d5ce6677dc6" Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.255 [INFO][4871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.255 [INFO][4871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.324 [INFO][4881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" HandleID="k8s-pod-network.690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.324 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.324 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.335 [WARNING][4881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" HandleID="k8s-pod-network.690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.335 [INFO][4881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" HandleID="k8s-pod-network.690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.338 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:03.352858 containerd[1448]: 2025-07-12 00:18:03.340 [INFO][4871] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Jul 12 00:18:03.354088 containerd[1448]: time="2025-07-12T00:18:03.353965644Z" level=info msg="TearDown network for sandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\" successfully" Jul 12 00:18:03.354088 containerd[1448]: time="2025-07-12T00:18:03.353998845Z" level=info msg="StopPodSandbox for \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\" returns successfully" Jul 12 00:18:03.354730 kubelet[2459]: E0712 00:18:03.354698 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:03.356507 systemd[1]: run-netns-cni\x2d5a6735de\x2d924c\x2d9610\x2dabd8\x2d0d5ce6677dc6.mount: Deactivated successfully. Jul 12 00:18:03.357611 containerd[1448]: time="2025-07-12T00:18:03.357471636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-knnq4,Uid:c50cf70e-483e-49bb-a8b2-b017faf73702,Namespace:kube-system,Attempt:1,}" Jul 12 00:18:03.363503 systemd-logind[1428]: New session 8 of user core. Jul 12 00:18:03.371886 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 12 00:18:03.409521 systemd-networkd[1385]: cali4e8419d8bec: Gained IPv6LL Jul 12 00:18:03.599087 systemd-networkd[1385]: cali208b141b9a2: Link UP Jul 12 00:18:03.599888 systemd-networkd[1385]: cali208b141b9a2: Gained carrier Jul 12 00:18:03.601652 systemd-networkd[1385]: cali89a1f8dbc38: Gained IPv6LL Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.501 [INFO][4911] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0 coredns-7c65d6cfc9- kube-system c50cf70e-483e-49bb-a8b2-b017faf73702 1009 0 2025-07-12 00:17:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-knnq4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali208b141b9a2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" Namespace="kube-system" Pod="coredns-7c65d6cfc9-knnq4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--knnq4-" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.501 [INFO][4911] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" Namespace="kube-system" Pod="coredns-7c65d6cfc9-knnq4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.544 [INFO][4920] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" HandleID="k8s-pod-network.2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.544 [INFO][4920] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" HandleID="k8s-pod-network.2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-knnq4", "timestamp":"2025-07-12 00:18:03.544081367 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.544 [INFO][4920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.544 [INFO][4920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.544 [INFO][4920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.555 [INFO][4920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" host="localhost" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.561 [INFO][4920] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.568 [INFO][4920] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.570 [INFO][4920] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.574 [INFO][4920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.574 [INFO][4920] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" host="localhost" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.576 [INFO][4920] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846 Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.580 [INFO][4920] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" host="localhost" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.590 [INFO][4920] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" host="localhost" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.590 [INFO][4920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" host="localhost" Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.590 [INFO][4920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:18:03.620810 containerd[1448]: 2025-07-12 00:18:03.590 [INFO][4920] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" HandleID="k8s-pod-network.2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:03.621822 containerd[1448]: 2025-07-12 00:18:03.592 [INFO][4911] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" Namespace="kube-system" Pod="coredns-7c65d6cfc9-knnq4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c50cf70e-483e-49bb-a8b2-b017faf73702", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-knnq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali208b141b9a2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:03.621822 containerd[1448]: 2025-07-12 00:18:03.593 [INFO][4911] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" Namespace="kube-system" Pod="coredns-7c65d6cfc9-knnq4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:03.621822 containerd[1448]: 2025-07-12 00:18:03.593 [INFO][4911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali208b141b9a2 ContainerID="2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" Namespace="kube-system" Pod="coredns-7c65d6cfc9-knnq4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:03.621822 containerd[1448]: 2025-07-12 00:18:03.600 [INFO][4911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" Namespace="kube-system" Pod="coredns-7c65d6cfc9-knnq4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:03.621822 
containerd[1448]: 2025-07-12 00:18:03.603 [INFO][4911] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" Namespace="kube-system" Pod="coredns-7c65d6cfc9-knnq4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c50cf70e-483e-49bb-a8b2-b017faf73702", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846", Pod:"coredns-7c65d6cfc9-knnq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali208b141b9a2", MAC:"02:4a:f9:9f:64:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:03.621822 containerd[1448]: 2025-07-12 00:18:03.613 [INFO][4911] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846" Namespace="kube-system" Pod="coredns-7c65d6cfc9-knnq4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:03.646614 containerd[1448]: time="2025-07-12T00:18:03.646396937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:18:03.646614 containerd[1448]: time="2025-07-12T00:18:03.646508299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:18:03.646614 containerd[1448]: time="2025-07-12T00:18:03.646532699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:03.646950 containerd[1448]: time="2025-07-12T00:18:03.646885627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:03.668810 sshd[4879]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:03.670608 systemd[1]: Started cri-containerd-2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846.scope - libcontainer container 2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846. Jul 12 00:18:03.673867 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:42694.service: Deactivated successfully. Jul 12 00:18:03.676534 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:18:03.677611 systemd-logind[1428]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:18:03.679329 systemd-logind[1428]: Removed session 8. Jul 12 00:18:03.684331 kubelet[2459]: E0712 00:18:03.684299 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:03.689321 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:18:03.713853 containerd[1448]: time="2025-07-12T00:18:03.713729632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-knnq4,Uid:c50cf70e-483e-49bb-a8b2-b017faf73702,Namespace:kube-system,Attempt:1,} returns sandbox id \"2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846\"" Jul 12 00:18:03.714778 kubelet[2459]: E0712 00:18:03.714756 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:03.717849 containerd[1448]: time="2025-07-12T00:18:03.717814275Z" level=info msg="CreateContainer within sandbox \"2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:18:03.735071 containerd[1448]: time="2025-07-12T00:18:03.735022227Z" level=info msg="CreateContainer within sandbox \"2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e41c7700781b3f3448d260907f991ff7cec46e49a27a30665dd20fb3e14ab669\"" Jul 12 00:18:03.736066 containerd[1448]: time="2025-07-12T00:18:03.736036807Z" level=info msg="StartContainer for \"e41c7700781b3f3448d260907f991ff7cec46e49a27a30665dd20fb3e14ab669\"" Jul 12 00:18:03.763593 systemd[1]: Started cri-containerd-e41c7700781b3f3448d260907f991ff7cec46e49a27a30665dd20fb3e14ab669.scope - libcontainer container e41c7700781b3f3448d260907f991ff7cec46e49a27a30665dd20fb3e14ab669. 
Jul 12 00:18:03.805881 containerd[1448]: time="2025-07-12T00:18:03.805505786Z" level=info msg="StartContainer for \"e41c7700781b3f3448d260907f991ff7cec46e49a27a30665dd20fb3e14ab669\" returns successfully" Jul 12 00:18:03.931339 kubelet[2459]: I0712 00:18:03.931182 2459 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:18:04.232497 containerd[1448]: time="2025-07-12T00:18:04.232016562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:18:04.232775 containerd[1448]: time="2025-07-12T00:18:04.232737377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:18:04.240215 containerd[1448]: time="2025-07-12T00:18:04.240140964Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:18:04.243326 containerd[1448]: time="2025-07-12T00:18:04.243278386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:18:04.244211 containerd[1448]: time="2025-07-12T00:18:04.244170484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.776963241s" Jul 12 00:18:04.244262 containerd[1448]: time="2025-07-12T00:18:04.244208884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:18:04.245724 containerd[1448]: time="2025-07-12T00:18:04.245688074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:18:04.247047 containerd[1448]: time="2025-07-12T00:18:04.247020340Z" level=info msg="CreateContainer within sandbox \"c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:18:04.256851 containerd[1448]: time="2025-07-12T00:18:04.256796974Z" level=info msg="CreateContainer within sandbox \"c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bae726ad45007ebeba2f2918680c468d461b5a316fcffb7bc35008b0e9729547\"" Jul 12 00:18:04.257515 containerd[1448]: time="2025-07-12T00:18:04.257489268Z" level=info msg="StartContainer for \"bae726ad45007ebeba2f2918680c468d461b5a316fcffb7bc35008b0e9729547\"" Jul 12 00:18:04.287609 systemd[1]: Started cri-containerd-bae726ad45007ebeba2f2918680c468d461b5a316fcffb7bc35008b0e9729547.scope - libcontainer container bae726ad45007ebeba2f2918680c468d461b5a316fcffb7bc35008b0e9729547. 
Jul 12 00:18:04.306972 systemd-networkd[1385]: cali28c6b745c4c: Gained IPv6LL Jul 12 00:18:04.320715 containerd[1448]: time="2025-07-12T00:18:04.320671962Z" level=info msg="StartContainer for \"bae726ad45007ebeba2f2918680c468d461b5a316fcffb7bc35008b0e9729547\" returns successfully" Jul 12 00:18:04.536645 containerd[1448]: time="2025-07-12T00:18:04.535996874Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:18:04.537855 containerd[1448]: time="2025-07-12T00:18:04.537805070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:18:04.556991 containerd[1448]: time="2025-07-12T00:18:04.556933690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 311.204215ms" Jul 12 00:18:04.556991 containerd[1448]: time="2025-07-12T00:18:04.556986251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:18:04.558890 containerd[1448]: time="2025-07-12T00:18:04.558768726Z" level=info msg="CreateContainer within sandbox \"5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:18:04.559663 containerd[1448]: time="2025-07-12T00:18:04.559608463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:18:04.580286 containerd[1448]: time="2025-07-12T00:18:04.580139230Z" level=info msg="CreateContainer within sandbox \"5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de647f1976b4a3a2524a6cd0d7eb8b7ad016bf11d80e234c2546c7ad23a9f71e\"" Jul 12 00:18:04.581080 containerd[1448]: time="2025-07-12T00:18:04.581047328Z" level=info msg="StartContainer for \"de647f1976b4a3a2524a6cd0d7eb8b7ad016bf11d80e234c2546c7ad23a9f71e\"" Jul 12 00:18:04.617618 systemd[1]: Started cri-containerd-de647f1976b4a3a2524a6cd0d7eb8b7ad016bf11d80e234c2546c7ad23a9f71e.scope - libcontainer container de647f1976b4a3a2524a6cd0d7eb8b7ad016bf11d80e234c2546c7ad23a9f71e. 
Jul 12 00:18:04.694140 containerd[1448]: time="2025-07-12T00:18:04.694101811Z" level=info msg="StartContainer for \"de647f1976b4a3a2524a6cd0d7eb8b7ad016bf11d80e234c2546c7ad23a9f71e\" returns successfully" Jul 12 00:18:04.701632 kubelet[2459]: E0712 00:18:04.701339 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:04.705783 kubelet[2459]: E0712 00:18:04.705659 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:04.743602 kubelet[2459]: I0712 00:18:04.743170 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-knnq4" podStartSLOduration=36.743141664 podStartE2EDuration="36.743141664s" podCreationTimestamp="2025-07-12 00:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:18:04.716832102 +0000 UTC m=+41.606512878" watchObservedRunningTime="2025-07-12 00:18:04.743141664 +0000 UTC m=+41.632822480" Jul 12 00:18:04.748570 kubelet[2459]: I0712 00:18:04.748461 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bf689cfcd-9drtq" podStartSLOduration=25.108498708 podStartE2EDuration="27.74844221s" podCreationTimestamp="2025-07-12 00:17:37 +0000 UTC" firstStartedPulling="2025-07-12 00:18:01.605094679 +0000 UTC m=+38.494775455" lastFinishedPulling="2025-07-12 00:18:04.245038141 +0000 UTC m=+41.134718957" observedRunningTime="2025-07-12 00:18:04.747036662 +0000 UTC m=+41.636717478" watchObservedRunningTime="2025-07-12 00:18:04.74844221 +0000 UTC m=+41.638123026" Jul 12 00:18:05.187982 containerd[1448]: time="2025-07-12T00:18:05.187926868Z" level=info msg="StopPodSandbox for \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\"" Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.240 [INFO][5196] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.240 [INFO][5196] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" iface="eth0" netns="/var/run/netns/cni-0ba18f3e-2f76-06d2-86a9-e8f712a6c405" Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.241 [INFO][5196] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" iface="eth0" netns="/var/run/netns/cni-0ba18f3e-2f76-06d2-86a9-e8f712a6c405" Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.242 [INFO][5196] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" iface="eth0" netns="/var/run/netns/cni-0ba18f3e-2f76-06d2-86a9-e8f712a6c405" Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.242 [INFO][5196] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.242 [INFO][5196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.268 [INFO][5205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" HandleID="k8s-pod-network.4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.268 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.268 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.277 [WARNING][5205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" HandleID="k8s-pod-network.4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.277 [INFO][5205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" HandleID="k8s-pod-network.4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.278 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:05.283243 containerd[1448]: 2025-07-12 00:18:05.281 [INFO][5196] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:05.283831 containerd[1448]: time="2025-07-12T00:18:05.283477712Z" level=info msg="TearDown network for sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\" successfully" Jul 12 00:18:05.283831 containerd[1448]: time="2025-07-12T00:18:05.283531593Z" level=info msg="StopPodSandbox for \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\" returns successfully" Jul 12 00:18:05.284554 containerd[1448]: time="2025-07-12T00:18:05.284524332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jmdtm,Uid:26782363-597c-473e-9a7b-6c89373057d1,Namespace:calico-system,Attempt:1,}" Jul 12 00:18:05.286211 systemd[1]: run-netns-cni\x2d0ba18f3e\x2d2f76\x2d06d2\x2d86a9\x2de8f712a6c405.mount: Deactivated successfully. 
Jul 12 00:18:05.330486 systemd-networkd[1385]: cali208b141b9a2: Gained IPv6LL
Jul 12 00:18:05.449604 systemd-networkd[1385]: cali5b8609c3d73: Link UP
Jul 12 00:18:05.449850 systemd-networkd[1385]: cali5b8609c3d73: Gained carrier
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.357 [INFO][5212] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0 goldmane-58fd7646b9- calico-system 26782363-597c-473e-9a7b-6c89373057d1 1051 0 2025-07-12 00:17:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-jmdtm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5b8609c3d73 [] [] }} ContainerID="ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" Namespace="calico-system" Pod="goldmane-58fd7646b9-jmdtm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--jmdtm-"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.357 [INFO][5212] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" Namespace="calico-system" Pod="goldmane-58fd7646b9-jmdtm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.391 [INFO][5227] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" HandleID="k8s-pod-network.ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.391 [INFO][5227] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" HandleID="k8s-pod-network.ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b76a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-jmdtm", "timestamp":"2025-07-12 00:18:05.391125949 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.391 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.391 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.391 [INFO][5227] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.404 [INFO][5227] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" host="localhost"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.409 [INFO][5227] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.414 [INFO][5227] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.416 [INFO][5227] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.419 [INFO][5227] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.419 [INFO][5227] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" host="localhost"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.422 [INFO][5227] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.426 [INFO][5227] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" host="localhost"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.435 [INFO][5227] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" host="localhost"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.436 [INFO][5227] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" host="localhost"
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.436 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:18:05.468545 containerd[1448]: 2025-07-12 00:18:05.436 [INFO][5227] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" HandleID="k8s-pod-network.ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0"
Jul 12 00:18:05.469296 containerd[1448]: 2025-07-12 00:18:05.441 [INFO][5212] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" Namespace="calico-system" Pod="goldmane-58fd7646b9-jmdtm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"26782363-597c-473e-9a7b-6c89373057d1", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-jmdtm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b8609c3d73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:18:05.469296 containerd[1448]: 2025-07-12 00:18:05.441 [INFO][5212] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" Namespace="calico-system" Pod="goldmane-58fd7646b9-jmdtm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0"
Jul 12 00:18:05.469296 containerd[1448]: 2025-07-12 00:18:05.441 [INFO][5212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b8609c3d73 ContainerID="ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" Namespace="calico-system" Pod="goldmane-58fd7646b9-jmdtm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0"
Jul 12 00:18:05.469296 containerd[1448]: 2025-07-12 00:18:05.447 [INFO][5212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" Namespace="calico-system" Pod="goldmane-58fd7646b9-jmdtm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0"
Jul 12 00:18:05.469296 containerd[1448]: 2025-07-12 00:18:05.449 [INFO][5212] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" Namespace="calico-system" Pod="goldmane-58fd7646b9-jmdtm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"26782363-597c-473e-9a7b-6c89373057d1", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d", Pod:"goldmane-58fd7646b9-jmdtm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b8609c3d73", MAC:"b6:38:10:a1:8c:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:18:05.469296 containerd[1448]: 2025-07-12 00:18:05.464 [INFO][5212] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d" Namespace="calico-system" Pod="goldmane-58fd7646b9-jmdtm" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0"
Jul 12 00:18:05.498208 containerd[1448]: time="2025-07-12T00:18:05.498111174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:18:05.498208 containerd[1448]: time="2025-07-12T00:18:05.498169975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:18:05.498208 containerd[1448]: time="2025-07-12T00:18:05.498181895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:18:05.498620 containerd[1448]: time="2025-07-12T00:18:05.498256656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:18:05.527723 systemd[1]: Started cri-containerd-ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d.scope - libcontainer container ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d.
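The ipam entries above trace Calico's block-affinity assignment end to end: take the host-wide IPAM lock, look up the host's existing block affinities, try the affine block 192.168.88.128/26, load and confirm it, claim a free address from it (192.168.88.136 here), create a handle recording the owner, and write the block back before releasing the lock. A loose Go sketch of the claim step; the types and helpers are invented for illustration and are not Calico's API:

package main

import (
	"fmt"
	"net"
)

// block is a stand-in for a Calico IPAM allocation block: a CIDR with
// affinity to one host plus a record of which handle owns each IP.
type block struct {
	cidr      net.IPNet
	allocated map[string]string // IP -> owning handle ID
}

// assignFromAffineBlock claims the first free address in the block and
// records the handle, mirroring "Attempting to assign 1 addresses from
// block" followed by "Writing block in order to claim IPs".
func (b *block) assignFromAffineBlock(handleID string) (net.IP, error) {
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, taken := b.allocated[ip.String()]; !taken {
			b.allocated[ip.String()] = handleID
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", b.cidr.String())
}

// next returns the following IPv4 address.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: *cidr, allocated: map[string]string{}}
	ip, _ := b.assignFromAffineBlock("k8s-pod-network.example-handle")
	fmt.Println("claimed", ip)
}

The host-wide lock bracketing each sequence ("About to acquire" / "Released host-wide IPAM lock") is there so concurrent CNI invocations on the same node do not claim the same address.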
Jul 12 00:18:05.545366 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 00:18:05.577931 containerd[1448]: time="2025-07-12T00:18:05.577883713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jmdtm,Uid:26782363-597c-473e-9a7b-6c89373057d1,Namespace:calico-system,Attempt:1,} returns sandbox id \"ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d\""
Jul 12 00:18:05.709325 kubelet[2459]: I0712 00:18:05.708474 2459 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 12 00:18:05.709325 kubelet[2459]: E0712 00:18:05.708997 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:05.727093 kubelet[2459]: I0712 00:18:05.727037 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bf689cfcd-j9lf2" podStartSLOduration=26.78578387 podStartE2EDuration="28.727019991s" podCreationTimestamp="2025-07-12 00:17:37 +0000 UTC" firstStartedPulling="2025-07-12 00:18:02.616485984 +0000 UTC m=+39.506166760" lastFinishedPulling="2025-07-12 00:18:04.557722065 +0000 UTC m=+41.447402881" observedRunningTime="2025-07-12 00:18:05.726888148 +0000 UTC m=+42.616568964" watchObservedRunningTime="2025-07-12 00:18:05.727019991 +0000 UTC m=+42.616700807"
Jul 12 00:18:05.921159 containerd[1448]: time="2025-07-12T00:18:05.920347962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:18:05.922423 containerd[1448]: time="2025-07-12T00:18:05.922346560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702"
Jul 12 00:18:05.923521 containerd[1448]: time="2025-07-12T00:18:05.923488062Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:18:05.925923 containerd[1448]: time="2025-07-12T00:18:05.925893309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:18:05.927434 containerd[1448]: time="2025-07-12T00:18:05.927373937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.367723874s"
Jul 12 00:18:05.927581 containerd[1448]: time="2025-07-12T00:18:05.927563301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\""
Jul 12 00:18:05.928573 containerd[1448]: time="2025-07-12T00:18:05.928546320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 12 00:18:05.931661 containerd[1448]: time="2025-07-12T00:18:05.931630219Z" level=info msg="CreateContainer within sandbox \"787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 12 00:18:05.950667 containerd[1448]: time="2025-07-12T00:18:05.950592625Z" level=info msg="CreateContainer within sandbox \"787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c4d221c4a4c455ff71ab93e89c1fbc7cf101fa33caa16fc83dcedfa9bbda71af\""
Jul 12 00:18:05.951596 containerd[1448]: time="2025-07-12T00:18:05.951202557Z" level=info msg="StartContainer for \"c4d221c4a4c455ff71ab93e89c1fbc7cf101fa33caa16fc83dcedfa9bbda71af\""
Jul 12 00:18:05.991584 systemd[1]: Started cri-containerd-c4d221c4a4c455ff71ab93e89c1fbc7cf101fa33caa16fc83dcedfa9bbda71af.scope - libcontainer container c4d221c4a4c455ff71ab93e89c1fbc7cf101fa33caa16fc83dcedfa9bbda71af.
Jul 12 00:18:06.053025 containerd[1448]: time="2025-07-12T00:18:06.052804731Z" level=info msg="StartContainer for \"c4d221c4a4c455ff71ab93e89c1fbc7cf101fa33caa16fc83dcedfa9bbda71af\" returns successfully"
Jul 12 00:18:06.712640 kubelet[2459]: E0712 00:18:06.712612 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:07.121563 systemd-networkd[1385]: cali5b8609c3d73: Gained IPv6LL
Jul 12 00:18:07.403793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901220942.mount: Deactivated successfully.
Jul 12 00:18:07.909932 containerd[1448]: time="2025-07-12T00:18:07.908866246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:18:07.910461 containerd[1448]: time="2025-07-12T00:18:07.910432435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Jul 12 00:18:07.912132 containerd[1448]: time="2025-07-12T00:18:07.912092185Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:18:07.915909 containerd[1448]: time="2025-07-12T00:18:07.915850294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:18:07.916819 containerd[1448]: time="2025-07-12T00:18:07.916779791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 1.988055548s"
Jul 12 00:18:07.916991 containerd[1448]: time="2025-07-12T00:18:07.916917393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Jul 12 00:18:07.919448 containerd[1448]: time="2025-07-12T00:18:07.918749227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 12 00:18:07.920174 containerd[1448]: time="2025-07-12T00:18:07.920076611Z" level=info msg="CreateContainer within sandbox \"ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 12 00:18:07.939843 containerd[1448]: time="2025-07-12T00:18:07.939676530Z" level=info msg="CreateContainer within sandbox \"ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c735da3486ddf8f411fa092d8aa4fb0ac5e5d6a539fd909169504c32e407ae49\""
Jul 12 00:18:07.941959 containerd[1448]: time="2025-07-12T00:18:07.941903251Z" level=info msg="StartContainer for \"c735da3486ddf8f411fa092d8aa4fb0ac5e5d6a539fd909169504c32e407ae49\""
Jul 12 00:18:07.980581 systemd[1]: Started cri-containerd-c735da3486ddf8f411fa092d8aa4fb0ac5e5d6a539fd909169504c32e407ae49.scope - libcontainer container c735da3486ddf8f411fa092d8aa4fb0ac5e5d6a539fd909169504c32e407ae49.
Jul 12 00:18:08.033482 containerd[1448]: time="2025-07-12T00:18:08.033433032Z" level=info msg="StartContainer for \"c735da3486ddf8f411fa092d8aa4fb0ac5e5d6a539fd909169504c32e407ae49\" returns successfully"
Jul 12 00:18:08.681565 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:42710.service - OpenSSH per-connection server daemon (10.0.0.1:42710).
Jul 12 00:18:08.806978 sshd[5383]: Accepted publickey for core from 10.0.0.1 port 42710 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:08.812040 sshd[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:08.819508 systemd-logind[1428]: New session 9 of user core.
Jul 12 00:18:08.825592 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 12 00:18:09.032859 containerd[1448]: time="2025-07-12T00:18:09.031993213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 12 00:18:09.033463 containerd[1448]: time="2025-07-12T00:18:09.033438398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:18:09.041117 containerd[1448]: time="2025-07-12T00:18:09.039083256Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:18:09.041117 containerd[1448]: time="2025-07-12T00:18:09.039936031Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.121151844s"
Jul 12 00:18:09.041117 containerd[1448]: time="2025-07-12T00:18:09.039969272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 12 00:18:09.041117 containerd[1448]: time="2025-07-12T00:18:09.040496721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:18:09.047913 containerd[1448]: time="2025-07-12T00:18:09.047876810Z" level=info msg="CreateContainer within sandbox \"787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 12 00:18:09.078796 containerd[1448]: time="2025-07-12T00:18:09.078736588Z" level=info msg="CreateContainer within sandbox \"787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"de7ae601eb1d6a08cfebfc7385dfcdb738c8be50cd718a70a09463caa0793c42\""
Jul 12 00:18:09.079625 containerd[1448]: time="2025-07-12T00:18:09.079306558Z" level=info msg="StartContainer for \"de7ae601eb1d6a08cfebfc7385dfcdb738c8be50cd718a70a09463caa0793c42\""
Jul 12 00:18:09.131576 systemd[1]: Started cri-containerd-de7ae601eb1d6a08cfebfc7385dfcdb738c8be50cd718a70a09463caa0793c42.scope - libcontainer container de7ae601eb1d6a08cfebfc7385dfcdb738c8be50cd718a70a09463caa0793c42.
Jul 12 00:18:09.162800 containerd[1448]: time="2025-07-12T00:18:09.162747173Z" level=info msg="StartContainer for \"de7ae601eb1d6a08cfebfc7385dfcdb738c8be50cd718a70a09463caa0793c42\" returns successfully"
Jul 12 00:18:09.308889 kubelet[2459]: I0712 00:18:09.308771 2459 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 12 00:18:09.309369 kubelet[2459]: I0712 00:18:09.308874 2459 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 12 00:18:09.443194 sshd[5383]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:09.446758 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:42710.service: Deactivated successfully.
Jul 12 00:18:09.448744 systemd[1]: session-9.scope: Deactivated successfully.
Jul 12 00:18:09.449627 systemd-logind[1428]: Session 9 logged out. Waiting for processes to exit.
Jul 12 00:18:09.450523 systemd-logind[1428]: Removed session 9.
Jul 12 00:18:09.758906 systemd[1]: run-containerd-runc-k8s.io-de7ae601eb1d6a08cfebfc7385dfcdb738c8be50cd718a70a09463caa0793c42-runc.ddMG2Z.mount: Deactivated successfully.
Jul 12 00:18:09.764132 kubelet[2459]: I0712 00:18:09.763936 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wfgr2" podStartSLOduration=22.460434791 podStartE2EDuration="28.763906296s" podCreationTimestamp="2025-07-12 00:17:41 +0000 UTC" firstStartedPulling="2025-07-12 00:18:02.741340731 +0000 UTC m=+39.631021547" lastFinishedPulling="2025-07-12 00:18:09.044812236 +0000 UTC m=+45.934493052" observedRunningTime="2025-07-12 00:18:09.763606771 +0000 UTC m=+46.653287587" watchObservedRunningTime="2025-07-12 00:18:09.763906296 +0000 UTC m=+46.653587112"
Jul 12 00:18:09.764298 kubelet[2459]: I0712 00:18:09.764265 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-jmdtm" podStartSLOduration=25.426054599 podStartE2EDuration="27.764257822s" podCreationTimestamp="2025-07-12 00:17:42 +0000 UTC" firstStartedPulling="2025-07-12 00:18:05.580362001 +0000 UTC m=+42.470042817" lastFinishedPulling="2025-07-12 00:18:07.918565224 +0000 UTC m=+44.808246040" observedRunningTime="2025-07-12 00:18:08.753064924 +0000 UTC m=+45.642745740" watchObservedRunningTime="2025-07-12 00:18:09.764257822 +0000 UTC m=+46.653938638"
Jul 12 00:18:14.454162 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:35100.service - OpenSSH per-connection server daemon (10.0.0.1:35100).
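The pod_startup_latency_tracker entries can be cross-checked from their own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. For csi-node-driver-wfgr2 above: 28.763906296s − 6.303471505s = 22.460434791s. A small Go check of the relationship; the formula is read off the logged values rather than taken from kubelet's source, and nanosecond-level discrepancies in some entries come from kubelet mixing wall-clock and monotonic (m=+...) readings:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the csi-node-driver-wfgr2 entry above.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-07-12 00:17:41 +0000 UTC")            // podCreationTimestamp
	observed := parse("2025-07-12 00:18:09.763906296 +0000 UTC") // watchObservedRunningTime
	pullStart := parse("2025-07-12 00:18:02.741340731 +0000 UTC")
	pullEnd := parse("2025-07-12 00:18:09.044812236 +0000 UTC")

	e2e := observed.Sub(created)        // podStartE2EDuration: 28.763906296s
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: 22.460434791s
	fmt.Println(e2e, slo)
}

For pods whose images needed no pull, the two durations coincide: the coredns entry earlier has zero-value pulling timestamps and identical SLO and E2E figures.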
Jul 12 00:18:14.499906 sshd[5499]: Accepted publickey for core from 10.0.0.1 port 35100 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:14.502065 sshd[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:14.510943 systemd-logind[1428]: New session 10 of user core.
Jul 12 00:18:14.521602 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 12 00:18:14.727178 sshd[5499]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:14.736033 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:35100.service: Deactivated successfully.
Jul 12 00:18:14.737696 systemd[1]: session-10.scope: Deactivated successfully.
Jul 12 00:18:14.739103 systemd-logind[1428]: Session 10 logged out. Waiting for processes to exit.
Jul 12 00:18:14.750718 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:35114.service - OpenSSH per-connection server daemon (10.0.0.1:35114).
Jul 12 00:18:14.752179 systemd-logind[1428]: Removed session 10.
Jul 12 00:18:14.784891 sshd[5514]: Accepted publickey for core from 10.0.0.1 port 35114 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:14.786477 sshd[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:14.790617 systemd-logind[1428]: New session 11 of user core.
Jul 12 00:18:14.799518 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 12 00:18:14.986705 sshd[5514]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:14.995833 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:35114.service: Deactivated successfully.
Jul 12 00:18:14.998717 systemd[1]: session-11.scope: Deactivated successfully.
Jul 12 00:18:14.999961 systemd-logind[1428]: Session 11 logged out. Waiting for processes to exit.
Jul 12 00:18:15.010219 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:35124.service - OpenSSH per-connection server daemon (10.0.0.1:35124).
Jul 12 00:18:15.011390 systemd-logind[1428]: Removed session 11.
Jul 12 00:18:15.051420 sshd[5526]: Accepted publickey for core from 10.0.0.1 port 35124 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:15.052738 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:15.057103 systemd-logind[1428]: New session 12 of user core.
Jul 12 00:18:15.064584 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 12 00:18:15.243809 sshd[5526]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:15.249501 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:35124.service: Deactivated successfully.
Jul 12 00:18:15.251787 systemd[1]: session-12.scope: Deactivated successfully.
Jul 12 00:18:15.253597 systemd-logind[1428]: Session 12 logged out. Waiting for processes to exit.
Jul 12 00:18:15.254653 systemd-logind[1428]: Removed session 12.
Jul 12 00:18:20.261278 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:35138.service - OpenSSH per-connection server daemon (10.0.0.1:35138).
Jul 12 00:18:20.299435 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 35138 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:20.300731 sshd[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:20.304880 systemd-logind[1428]: New session 13 of user core.
Jul 12 00:18:20.315563 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 12 00:18:20.448193 sshd[5555]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:20.455162 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:35138.service: Deactivated successfully.
Jul 12 00:18:20.457217 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 00:18:20.458683 systemd-logind[1428]: Session 13 logged out. Waiting for processes to exit.
Jul 12 00:18:20.470746 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:35144.service - OpenSSH per-connection server daemon (10.0.0.1:35144).
Jul 12 00:18:20.472090 systemd-logind[1428]: Removed session 13.
Jul 12 00:18:20.508001 sshd[5569]: Accepted publickey for core from 10.0.0.1 port 35144 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:20.509478 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:20.514482 systemd-logind[1428]: New session 14 of user core.
Jul 12 00:18:20.521560 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 12 00:18:20.754061 sshd[5569]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:20.768286 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:35144.service: Deactivated successfully.
Jul 12 00:18:20.770548 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 00:18:20.772011 systemd-logind[1428]: Session 14 logged out. Waiting for processes to exit.
Jul 12 00:18:20.773775 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:35150.service - OpenSSH per-connection server daemon (10.0.0.1:35150).
Jul 12 00:18:20.775221 systemd-logind[1428]: Removed session 14.
Jul 12 00:18:20.838913 sshd[5581]: Accepted publickey for core from 10.0.0.1 port 35150 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:20.840454 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:20.846444 systemd-logind[1428]: New session 15 of user core.
Jul 12 00:18:20.857951 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 12 00:18:22.614967 sshd[5581]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:22.623997 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:35150.service: Deactivated successfully.
Jul 12 00:18:22.627957 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 00:18:22.631986 systemd-logind[1428]: Session 15 logged out. Waiting for processes to exit.
Jul 12 00:18:22.637716 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:46860.service - OpenSSH per-connection server daemon (10.0.0.1:46860).
Jul 12 00:18:22.640593 systemd-logind[1428]: Removed session 15.
Jul 12 00:18:22.675929 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 46860 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:22.676797 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:22.680529 systemd-logind[1428]: New session 16 of user core.
Jul 12 00:18:22.693549 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 00:18:23.144258 sshd[5602]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:23.160252 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:46860.service: Deactivated successfully.
Jul 12 00:18:23.162723 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 00:18:23.164355 systemd-logind[1428]: Session 16 logged out. Waiting for processes to exit.
Jul 12 00:18:23.176738 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:46876.service - OpenSSH per-connection server daemon (10.0.0.1:46876).
Jul 12 00:18:23.178507 systemd-logind[1428]: Removed session 16.
Jul 12 00:18:23.209661 containerd[1448]: time="2025-07-12T00:18:23.209327966Z" level=info msg="StopPodSandbox for \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\""
Jul 12 00:18:23.223338 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 46876 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:23.225174 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:23.230353 systemd-logind[1428]: New session 17 of user core.
Jul 12 00:18:23.239578 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.253 [WARNING][5629] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0", GenerateName:"calico-apiserver-6bf689cfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"81143345-0e90-4512-9092-036342474e19", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf689cfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df", Pod:"calico-apiserver-6bf689cfcd-j9lf2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89a1f8dbc38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.254 [INFO][5629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515"
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.254 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" iface="eth0" netns=""
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.254 [INFO][5629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515"
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.254 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515"
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.286 [INFO][5638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" HandleID="k8s-pod-network.5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0"
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.287 [INFO][5638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.287 [INFO][5638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.301 [WARNING][5638] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" HandleID="k8s-pod-network.5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0"
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.301 [INFO][5638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" HandleID="k8s-pod-network.5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0"
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.302 [INFO][5638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:18:23.308343 containerd[1448]: 2025-07-12 00:18:23.306 [INFO][5629] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515"
Jul 12 00:18:23.308869 containerd[1448]: time="2025-07-12T00:18:23.308409721Z" level=info msg="TearDown network for sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\" successfully"
Jul 12 00:18:23.308869 containerd[1448]: time="2025-07-12T00:18:23.308436682Z" level=info msg="StopPodSandbox for \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\" returns successfully"
Jul 12 00:18:23.309290 containerd[1448]: time="2025-07-12T00:18:23.309217132Z" level=info msg="RemovePodSandbox for \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\""
Jul 12 00:18:23.312704 containerd[1448]: time="2025-07-12T00:18:23.312409976Z" level=info msg="Forcibly stopping sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\""
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.349 [WARNING][5665] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0", GenerateName:"calico-apiserver-6bf689cfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"81143345-0e90-4512-9092-036342474e19", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf689cfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5917bcbf5d75930ca28e1b1ac67ec91903e687f78cacaaa9bb64bd90b12a17df", Pod:"calico-apiserver-6bf689cfcd-j9lf2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89a1f8dbc38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.349 [INFO][5665] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515"
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.349 [INFO][5665] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" iface="eth0" netns=""
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.349 [INFO][5665] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515"
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.349 [INFO][5665] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515"
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.374 [INFO][5674] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" HandleID="k8s-pod-network.5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0"
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.374 [INFO][5674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.374 [INFO][5674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.383 [WARNING][5674] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" HandleID="k8s-pod-network.5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0"
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.383 [INFO][5674] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" HandleID="k8s-pod-network.5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--j9lf2-eth0"
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.385 [INFO][5674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:18:23.392144 containerd[1448]: 2025-07-12 00:18:23.390 [INFO][5665] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515"
Jul 12 00:18:23.395374 containerd[1448]: time="2025-07-12T00:18:23.393462765Z" level=info msg="TearDown network for sandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\" successfully"
Jul 12 00:18:23.410877 sshd[5614]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:23.416067 systemd-logind[1428]: Session 17 logged out. Waiting for processes to exit.
Jul 12 00:18:23.416759 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:46876.service: Deactivated successfully.
Jul 12 00:18:23.418961 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 00:18:23.420170 systemd-logind[1428]: Removed session 17.
Jul 12 00:18:23.426035 containerd[1448]: time="2025-07-12T00:18:23.425970010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 12 00:18:23.426134 containerd[1448]: time="2025-07-12T00:18:23.426081811Z" level=info msg="RemovePodSandbox \"5c67d5dd8717a6e62c45ff9ff128d5581efb88d361faff858432eeecb7dbf515\" returns successfully"
Jul 12 00:18:23.426703 containerd[1448]: time="2025-07-12T00:18:23.426663059Z" level=info msg="StopPodSandbox for \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\""
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.473 [WARNING][5693] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0", GenerateName:"calico-apiserver-6bf689cfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ca0963e-ae80-4ed4-8ecd-1417da594c22", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf689cfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab", Pod:"calico-apiserver-6bf689cfcd-9drtq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e8419d8bec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.473 [INFO][5693] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.473 [INFO][5693] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" iface="eth0" netns=""
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.473 [INFO][5693] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.473 [INFO][5693] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.493 [INFO][5701] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" HandleID="k8s-pod-network.37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0"
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.493 [INFO][5701] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.494 [INFO][5701] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.502 [WARNING][5701] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" HandleID="k8s-pod-network.37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0"
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.502 [INFO][5701] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" HandleID="k8s-pod-network.37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0"
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.503 [INFO][5701] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:18:23.507206 containerd[1448]: 2025-07-12 00:18:23.505 [INFO][5693] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:23.507206 containerd[1448]: time="2025-07-12T00:18:23.507153121Z" level=info msg="TearDown network for sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\" successfully"
Jul 12 00:18:23.507206 containerd[1448]: time="2025-07-12T00:18:23.507179681Z" level=info msg="StopPodSandbox for \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\" returns successfully"
Jul 12 00:18:23.507693 containerd[1448]: time="2025-07-12T00:18:23.507657568Z" level=info msg="RemovePodSandbox for \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\""
Jul 12 00:18:23.507693 containerd[1448]: time="2025-07-12T00:18:23.507686368Z" level=info msg="Forcibly stopping sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\""
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.541 [WARNING][5718] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0", GenerateName:"calico-apiserver-6bf689cfcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ca0963e-ae80-4ed4-8ecd-1417da594c22", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf689cfcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c67530a71d24ca34af681bc016f5a63793b013e669e27ae15c253ed49ddda9ab", Pod:"calico-apiserver-6bf689cfcd-9drtq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e8419d8bec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.541 [INFO][5718] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.541 [INFO][5718] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" iface="eth0" netns=""
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.542 [INFO][5718] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.542 [INFO][5718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.562 [INFO][5726] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" HandleID="k8s-pod-network.37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0"
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.562 [INFO][5726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.562 [INFO][5726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.570 [WARNING][5726] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" HandleID="k8s-pod-network.37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0"
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.571 [INFO][5726] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" HandleID="k8s-pod-network.37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2" Workload="localhost-k8s-calico--apiserver--6bf689cfcd--9drtq-eth0"
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.572 [INFO][5726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:18:23.576198 containerd[1448]: 2025-07-12 00:18:23.574 [INFO][5718] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2"
Jul 12 00:18:23.576647 containerd[1448]: time="2025-07-12T00:18:23.576241946Z" level=info msg="TearDown network for sandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\" successfully"
Jul 12 00:18:23.579426 containerd[1448]: time="2025-07-12T00:18:23.579368749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 12 00:18:23.579525 containerd[1448]: time="2025-07-12T00:18:23.579462390Z" level=info msg="RemovePodSandbox \"37f1d3dd56c564f91c614a5b73e5b34cd2f6e9985320b49b0cf02a4dfd6af6b2\" returns successfully"
Jul 12 00:18:23.580203 containerd[1448]: time="2025-07-12T00:18:23.579942637Z" level=info msg="StopPodSandbox for \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\""
Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.611 [WARNING][5743] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c50cf70e-483e-49bb-a8b2-b017faf73702", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846", Pod:"coredns-7c65d6cfc9-knnq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali208b141b9a2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.611 [INFO][5743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518"
Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.611 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" iface="eth0" netns=""
Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.611 [INFO][5743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518"
Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.611 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518"
Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.630 [INFO][5752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" HandleID="k8s-pod-network.690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0"
Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.630 [INFO][5752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.630 [INFO][5752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.645 [WARNING][5752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" HandleID="k8s-pod-network.690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.645 [INFO][5752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" HandleID="k8s-pod-network.690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.646 [INFO][5752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:23.650880 containerd[1448]: 2025-07-12 00:18:23.649 [INFO][5743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Jul 12 00:18:23.652416 containerd[1448]: time="2025-07-12T00:18:23.652238666Z" level=info msg="TearDown network for sandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\" successfully" Jul 12 00:18:23.652416 containerd[1448]: time="2025-07-12T00:18:23.652283146Z" level=info msg="StopPodSandbox for \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\" returns successfully" Jul 12 00:18:23.653368 containerd[1448]: time="2025-07-12T00:18:23.653340281Z" level=info msg="RemovePodSandbox for \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\"" Jul 12 00:18:23.653451 containerd[1448]: time="2025-07-12T00:18:23.653403802Z" level=info msg="Forcibly stopping sandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\"" Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.687 [WARNING][5771] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c50cf70e-483e-49bb-a8b2-b017faf73702", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2934499d3e37b55bbcf5e90d31ffa4278c53e3dcb8f87646760286cccc8df846", Pod:"coredns-7c65d6cfc9-knnq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali208b141b9a2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.687 [INFO][5771] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.687 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" iface="eth0" netns="" Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.687 [INFO][5771] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.687 [INFO][5771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.706 [INFO][5779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" HandleID="k8s-pod-network.690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.706 [INFO][5779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.706 [INFO][5779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.714 [WARNING][5779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" HandleID="k8s-pod-network.690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.714 [INFO][5779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" HandleID="k8s-pod-network.690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Workload="localhost-k8s-coredns--7c65d6cfc9--knnq4-eth0" Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.715 [INFO][5779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:23.719147 containerd[1448]: 2025-07-12 00:18:23.717 [INFO][5771] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518" Jul 12 00:18:23.719606 containerd[1448]: time="2025-07-12T00:18:23.719183582Z" level=info msg="TearDown network for sandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\" successfully" Jul 12 00:18:23.722263 containerd[1448]: time="2025-07-12T00:18:23.722229664Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:18:23.722336 containerd[1448]: time="2025-07-12T00:18:23.722318625Z" level=info msg="RemovePodSandbox \"690a982ebc1ff3b86739e8fa8060f350092f4951e262704ba46c54255824a518\" returns successfully" Jul 12 00:18:23.723034 containerd[1448]: time="2025-07-12T00:18:23.722753831Z" level=info msg="StopPodSandbox for \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\"" Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.758 [WARNING][5798] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"74f3a898-fc16-41f9-a59d-febcf1761d1e", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5", Pod:"coredns-7c65d6cfc9-k7xgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid06f8a479ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.758 [INFO][5798] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.758 [INFO][5798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" iface="eth0" netns="" Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.758 [INFO][5798] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.758 [INFO][5798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.785 [INFO][5806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" HandleID="k8s-pod-network.dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.785 [INFO][5806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.785 [INFO][5806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.797 [WARNING][5806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" HandleID="k8s-pod-network.dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.797 [INFO][5806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" HandleID="k8s-pod-network.dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.799 [INFO][5806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:23.803453 containerd[1448]: 2025-07-12 00:18:23.801 [INFO][5798] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Jul 12 00:18:23.803453 containerd[1448]: time="2025-07-12T00:18:23.803405694Z" level=info msg="TearDown network for sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\" successfully" Jul 12 00:18:23.803453 containerd[1448]: time="2025-07-12T00:18:23.803433615Z" level=info msg="StopPodSandbox for \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\" returns successfully" Jul 12 00:18:23.804677 containerd[1448]: time="2025-07-12T00:18:23.804172865Z" level=info msg="RemovePodSandbox for \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\"" Jul 12 00:18:23.804677 containerd[1448]: time="2025-07-12T00:18:23.804204385Z" level=info msg="Forcibly stopping sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\"" Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.838 [WARNING][5825] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"74f3a898-fc16-41f9-a59d-febcf1761d1e", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e8023ba05d504505c1991413219e7b95560818a59ffa2ee46de4c49ee84bed5", Pod:"coredns-7c65d6cfc9-k7xgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid06f8a479ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.839 [INFO][5825] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.839 [INFO][5825] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" iface="eth0" netns="" Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.839 [INFO][5825] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.839 [INFO][5825] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.858 [INFO][5834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" HandleID="k8s-pod-network.dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.859 [INFO][5834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.859 [INFO][5834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.868 [WARNING][5834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" HandleID="k8s-pod-network.dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.868 [INFO][5834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" HandleID="k8s-pod-network.dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Workload="localhost-k8s-coredns--7c65d6cfc9--k7xgr-eth0" Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.870 [INFO][5834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:23.873703 containerd[1448]: 2025-07-12 00:18:23.871 [INFO][5825] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770" Jul 12 00:18:23.874110 containerd[1448]: time="2025-07-12T00:18:23.873739377Z" level=info msg="TearDown network for sandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\" successfully" Jul 12 00:18:23.886093 containerd[1448]: time="2025-07-12T00:18:23.885981184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:18:23.886186 containerd[1448]: time="2025-07-12T00:18:23.886098386Z" level=info msg="RemovePodSandbox \"dc2d5b058bce36d88ddf7712fef34a2dedaba5b33cdccb98ad25a8e054e4f770\" returns successfully" Jul 12 00:18:23.886617 containerd[1448]: time="2025-07-12T00:18:23.886585152Z" level=info msg="StopPodSandbox for \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\"" Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.920 [WARNING][5853] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"26782363-597c-473e-9a7b-6c89373057d1", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d", Pod:"goldmane-58fd7646b9-jmdtm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b8609c3d73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.920 [INFO][5853] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.920 [INFO][5853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" iface="eth0" netns="" Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.920 [INFO][5853] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.921 [INFO][5853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.940 [INFO][5862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" HandleID="k8s-pod-network.4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.940 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.940 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.949 [WARNING][5862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" HandleID="k8s-pod-network.4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.949 [INFO][5862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" HandleID="k8s-pod-network.4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.951 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:23.954808 containerd[1448]: 2025-07-12 00:18:23.952 [INFO][5853] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:23.954808 containerd[1448]: time="2025-07-12T00:18:23.954618363Z" level=info msg="TearDown network for sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\" successfully" Jul 12 00:18:23.954808 containerd[1448]: time="2025-07-12T00:18:23.954645804Z" level=info msg="StopPodSandbox for \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\" returns successfully" Jul 12 00:18:23.955251 containerd[1448]: time="2025-07-12T00:18:23.955094610Z" level=info msg="RemovePodSandbox for \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\"" Jul 12 00:18:23.955251 containerd[1448]: time="2025-07-12T00:18:23.955134370Z" level=info msg="Forcibly stopping sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\"" Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:23.987 [WARNING][5880] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"26782363-597c-473e-9a7b-6c89373057d1", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad876f4565c5f059832bf2f5f2f86c60012bdb66c2eddad754a6c26efe33af8d", Pod:"goldmane-58fd7646b9-jmdtm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b8609c3d73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:23.988 [INFO][5880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:23.988 [INFO][5880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" iface="eth0" netns="" Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:23.988 [INFO][5880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:23.988 [INFO][5880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:24.007 [INFO][5888] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" HandleID="k8s-pod-network.4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:24.007 [INFO][5888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:24.007 [INFO][5888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:24.015 [WARNING][5888] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" HandleID="k8s-pod-network.4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:24.015 [INFO][5888] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" HandleID="k8s-pod-network.4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Workload="localhost-k8s-goldmane--58fd7646b9--jmdtm-eth0" Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:24.018 [INFO][5888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:24.022054 containerd[1448]: 2025-07-12 00:18:24.020 [INFO][5880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78" Jul 12 00:18:24.022551 containerd[1448]: time="2025-07-12T00:18:24.022080203Z" level=info msg="TearDown network for sandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\" successfully" Jul 12 00:18:24.024918 containerd[1448]: time="2025-07-12T00:18:24.024877081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:18:24.024980 containerd[1448]: time="2025-07-12T00:18:24.024957242Z" level=info msg="RemovePodSandbox \"4503d871cbc3af9f91946791b9b6ec272c51f4f7a086bcab877e4d109a539c78\" returns successfully" Jul 12 00:18:24.025630 containerd[1448]: time="2025-07-12T00:18:24.025594891Z" level=info msg="StopPodSandbox for \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\"" Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.055 [WARNING][5907] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0", GenerateName:"calico-kube-controllers-7b5f84d77b-", Namespace:"calico-system", SelfLink:"", UID:"7dcf09fb-a512-439e-938a-bfe4c44b49b4", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5f84d77b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710", Pod:"calico-kube-controllers-7b5f84d77b-mwdk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16dd4202b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.056 [INFO][5907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.056 [INFO][5907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" iface="eth0" netns="" Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.056 [INFO][5907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.056 [INFO][5907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.073 [INFO][5915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" HandleID="k8s-pod-network.a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.073 [INFO][5915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.073 [INFO][5915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.083 [WARNING][5915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" HandleID="k8s-pod-network.a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.083 [INFO][5915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" HandleID="k8s-pod-network.a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.084 [INFO][5915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:24.088149 containerd[1448]: 2025-07-12 00:18:24.086 [INFO][5907] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:24.088569 containerd[1448]: time="2025-07-12T00:18:24.088216017Z" level=info msg="TearDown network for sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\" successfully" Jul 12 00:18:24.088569 containerd[1448]: time="2025-07-12T00:18:24.088243978Z" level=info msg="StopPodSandbox for \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\" returns successfully" Jul 12 00:18:24.088950 containerd[1448]: time="2025-07-12T00:18:24.088894507Z" level=info msg="RemovePodSandbox for \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\"" Jul 12 00:18:24.088950 containerd[1448]: time="2025-07-12T00:18:24.088930267Z" level=info msg="Forcibly stopping sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\"" Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.123 [WARNING][5933] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0", GenerateName:"calico-kube-controllers-7b5f84d77b-", Namespace:"calico-system", SelfLink:"", UID:"7dcf09fb-a512-439e-938a-bfe4c44b49b4", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5f84d77b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a889238103fdd871ec750f9b841c2eba1ff5e457b82481fe7c818c79dac6710", Pod:"calico-kube-controllers-7b5f84d77b-mwdk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16dd4202b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.123 [INFO][5933] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.123 [INFO][5933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" iface="eth0" netns="" Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.123 [INFO][5933] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.123 [INFO][5933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.147 [INFO][5941] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" HandleID="k8s-pod-network.a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.147 [INFO][5941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.147 [INFO][5941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.164 [WARNING][5941] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" HandleID="k8s-pod-network.a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.164 [INFO][5941] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" HandleID="k8s-pod-network.a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Workload="localhost-k8s-calico--kube--controllers--7b5f84d77b--mwdk4-eth0" Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.165 [INFO][5941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:24.170213 containerd[1448]: 2025-07-12 00:18:24.168 [INFO][5933] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb" Jul 12 00:18:24.170665 containerd[1448]: time="2025-07-12T00:18:24.170251287Z" level=info msg="TearDown network for sandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\" successfully" Jul 12 00:18:24.173454 containerd[1448]: time="2025-07-12T00:18:24.173409810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:18:24.173539 containerd[1448]: time="2025-07-12T00:18:24.173484131Z" level=info msg="RemovePodSandbox \"a202f64d1e8d68d9a8740b174d55e04a9f0016c878588139504769bb710fc9bb\" returns successfully" Jul 12 00:18:24.174286 containerd[1448]: time="2025-07-12T00:18:24.173933617Z" level=info msg="StopPodSandbox for \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\"" Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.209 [WARNING][5959] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wfgr2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"015b368c-3c89-4707-af85-1b98a6fb48da", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09", Pod:"csi-node-driver-wfgr2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28c6b745c4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.209 [INFO][5959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.210 [INFO][5959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" iface="eth0" netns="" Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.210 [INFO][5959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.210 [INFO][5959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.230 [INFO][5968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" HandleID="k8s-pod-network.72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.230 [INFO][5968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.230 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.240 [WARNING][5968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" HandleID="k8s-pod-network.72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.240 [INFO][5968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" HandleID="k8s-pod-network.72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.242 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:24.246437 containerd[1448]: 2025-07-12 00:18:24.244 [INFO][5959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:24.247306 containerd[1448]: time="2025-07-12T00:18:24.246795362Z" level=info msg="TearDown network for sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\" successfully" Jul 12 00:18:24.247306 containerd[1448]: time="2025-07-12T00:18:24.247206487Z" level=info msg="StopPodSandbox for \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\" returns successfully" Jul 12 00:18:24.248330 containerd[1448]: time="2025-07-12T00:18:24.248305582Z" level=info msg="RemovePodSandbox for \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\"" Jul 12 00:18:24.248548 containerd[1448]: time="2025-07-12T00:18:24.248425944Z" level=info msg="Forcibly stopping sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\"" Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.285 [WARNING][5985] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wfgr2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"015b368c-3c89-4707-af85-1b98a6fb48da", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"787c711309229c2ebfe7cf5cf2fa0109abedb8b0dfe8e7684b548e5ed7e63c09", Pod:"csi-node-driver-wfgr2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28c6b745c4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.285 [INFO][5985] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.285 [INFO][5985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" iface="eth0" netns="" Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.285 [INFO][5985] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.285 [INFO][5985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.306 [INFO][5993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" HandleID="k8s-pod-network.72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.306 [INFO][5993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.306 [INFO][5993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.315 [WARNING][5993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" HandleID="k8s-pod-network.72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.315 [INFO][5993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" HandleID="k8s-pod-network.72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Workload="localhost-k8s-csi--node--driver--wfgr2-eth0" Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.316 [INFO][5993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:24.322238 containerd[1448]: 2025-07-12 00:18:24.319 [INFO][5985] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0" Jul 12 00:18:24.323440 containerd[1448]: time="2025-07-12T00:18:24.322742949Z" level=info msg="TearDown network for sandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\" successfully" Jul 12 00:18:24.325789 containerd[1448]: time="2025-07-12T00:18:24.325748870Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:18:24.325932 containerd[1448]: time="2025-07-12T00:18:24.325909712Z" level=info msg="RemovePodSandbox \"72fd0928d752dee2563ba8ca1ed1c798479e309dfba4a9cf7cbc0f8349f646c0\" returns successfully" Jul 12 00:18:24.326514 containerd[1448]: time="2025-07-12T00:18:24.326493080Z" level=info msg="StopPodSandbox for \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\"" Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.359 [WARNING][6011] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" WorkloadEndpoint="localhost-k8s-whisker--69966d7884--bjkn9-eth0" Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.359 [INFO][6011] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.359 [INFO][6011] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" iface="eth0" netns="" Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.359 [INFO][6011] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.359 [INFO][6011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.379 [INFO][6020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" HandleID="k8s-pod-network.6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Workload="localhost-k8s-whisker--69966d7884--bjkn9-eth0" Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.379 [INFO][6020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.379 [INFO][6020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.388 [WARNING][6020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" HandleID="k8s-pod-network.6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Workload="localhost-k8s-whisker--69966d7884--bjkn9-eth0" Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.388 [INFO][6020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" HandleID="k8s-pod-network.6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Workload="localhost-k8s-whisker--69966d7884--bjkn9-eth0" Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.390 [INFO][6020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:24.393603 containerd[1448]: 2025-07-12 00:18:24.391 [INFO][6011] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:18:24.393955 containerd[1448]: time="2025-07-12T00:18:24.393717869Z" level=info msg="TearDown network for sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\" successfully" Jul 12 00:18:24.393955 containerd[1448]: time="2025-07-12T00:18:24.393747389Z" level=info msg="StopPodSandbox for \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\" returns successfully" Jul 12 00:18:24.394456 containerd[1448]: time="2025-07-12T00:18:24.394426598Z" level=info msg="RemovePodSandbox for \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\"" Jul 12 00:18:24.394490 containerd[1448]: time="2025-07-12T00:18:24.394460399Z" level=info msg="Forcibly stopping sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\"" Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.430 [WARNING][6038] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" WorkloadEndpoint="localhost-k8s-whisker--69966d7884--bjkn9-eth0" Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.430 [INFO][6038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.430 [INFO][6038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" iface="eth0" netns="" Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.430 [INFO][6038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.430 [INFO][6038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.449 [INFO][6047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" HandleID="k8s-pod-network.6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Workload="localhost-k8s-whisker--69966d7884--bjkn9-eth0" Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.449 [INFO][6047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.449 [INFO][6047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.458 [WARNING][6047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" HandleID="k8s-pod-network.6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Workload="localhost-k8s-whisker--69966d7884--bjkn9-eth0" Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.458 [INFO][6047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" HandleID="k8s-pod-network.6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Workload="localhost-k8s-whisker--69966d7884--bjkn9-eth0" Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.459 [INFO][6047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:18:24.463415 containerd[1448]: 2025-07-12 00:18:24.461 [INFO][6038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882" Jul 12 00:18:24.463752 containerd[1448]: time="2025-07-12T00:18:24.463454772Z" level=info msg="TearDown network for sandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\" successfully" Jul 12 00:18:24.466756 containerd[1448]: time="2025-07-12T00:18:24.466712056Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:18:24.466824 containerd[1448]: time="2025-07-12T00:18:24.466796017Z" level=info msg="RemovePodSandbox \"6d94e2cb930ec341e255413cdc735e915f6feacb2f2b47ec0dacaa3177b32882\" returns successfully" Jul 12 00:18:28.422315 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:46884.service - OpenSSH per-connection server daemon (10.0.0.1:46884). Jul 12 00:18:28.464266 sshd[6079]: Accepted publickey for core from 10.0.0.1 port 46884 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:18:28.465698 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:28.469474 systemd-logind[1428]: New session 18 of user core. Jul 12 00:18:28.478629 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:18:28.659353 sshd[6079]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:28.664317 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:46884.service: Deactivated successfully. Jul 12 00:18:28.666197 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:18:28.667060 systemd-logind[1428]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:18:28.668094 systemd-logind[1428]: Removed session 18. Jul 12 00:18:31.188124 kubelet[2459]: E0712 00:18:31.187957 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:33.669397 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:51194.service - OpenSSH per-connection server daemon (10.0.0.1:51194). Jul 12 00:18:33.706871 sshd[6097]: Accepted publickey for core from 10.0.0.1 port 51194 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:18:33.708189 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:33.712053 systemd-logind[1428]: New session 19 of user core. Jul 12 00:18:33.718593 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 12 00:18:33.854130 sshd[6097]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:33.857774 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:51194.service: Deactivated successfully. Jul 12 00:18:33.861832 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:18:33.864921 systemd-logind[1428]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:18:33.865790 systemd-logind[1428]: Removed session 19. Jul 12 00:18:38.188425 kubelet[2459]: E0712 00:18:38.187711 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:38.864336 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:51198.service - OpenSSH per-connection server daemon (10.0.0.1:51198). Jul 12 00:18:38.913435 sshd[6162]: Accepted publickey for core from 10.0.0.1 port 51198 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:18:38.915229 sshd[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:38.921106 systemd-logind[1428]: New session 20 of user core. Jul 12 00:18:38.927595 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 00:18:39.240116 sshd[6162]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:39.243210 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:51198.service: Deactivated successfully. Jul 12 00:18:39.247157 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:18:39.248933 systemd-logind[1428]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:18:39.249799 systemd-logind[1428]: Removed session 20.