Jul 12 00:07:53.918002 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:07:53.918024 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:07:53.918035 kernel: KASLR enabled
Jul 12 00:07:53.918040 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:07:53.918046 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 12 00:07:53.918052 kernel: random: crng init done
Jul 12 00:07:53.918059 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:07:53.918065 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 12 00:07:53.918071 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:07:53.918078 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:07:53.918084 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:07:53.918090 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:07:53.918096 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:07:53.918102 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:07:53.918109 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:07:53.918117 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:07:53.918123 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:07:53.918130 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:07:53.918136 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 12 00:07:53.918142 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:07:53.918149 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:07:53.918155 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 12 00:07:53.918161 kernel: Zone ranges:
Jul 12 00:07:53.918168 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:07:53.918174 kernel: DMA32 empty
Jul 12 00:07:53.918181 kernel: Normal empty
Jul 12 00:07:53.918187 kernel: Movable zone start for each node
Jul 12 00:07:53.918193 kernel: Early memory node ranges
Jul 12 00:07:53.918200 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 12 00:07:53.918206 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 12 00:07:53.918212 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 12 00:07:53.918218 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 12 00:07:53.918224 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 12 00:07:53.918231 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 12 00:07:53.918237 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 12 00:07:53.918244 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:07:53.918250 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 12 00:07:53.918258 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:07:53.918264 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:07:53.918271 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:07:53.918299 kernel: psci: Trusted OS migration not required
Jul 12 00:07:53.918306 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:07:53.918313 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 00:07:53.918321 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:07:53.918328 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:07:53.918335 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 12 00:07:53.918342 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:07:53.918348 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:07:53.918355 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:07:53.918361 kernel: CPU features: detected: Spectre-v4
Jul 12 00:07:53.918368 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:07:53.918374 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:07:53.918381 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:07:53.918389 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:07:53.918396 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:07:53.918403 kernel: alternatives: applying boot alternatives
Jul 12 00:07:53.918410 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:07:53.918417 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:07:53.918424 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:07:53.918431 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:07:53.918438 kernel: Fallback order for Node 0: 0
Jul 12 00:07:53.918444 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 12 00:07:53.918451 kernel: Policy zone: DMA
Jul 12 00:07:53.918457 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:07:53.918465 kernel: software IO TLB: area num 4.
Jul 12 00:07:53.918472 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 12 00:07:53.918480 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 12 00:07:53.918486 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 00:07:53.918493 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:07:53.918500 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:07:53.918507 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 00:07:53.918514 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:07:53.918521 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:07:53.918528 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:07:53.918534 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 00:07:53.918541 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:07:53.918549 kernel: GICv3: 256 SPIs implemented
Jul 12 00:07:53.918556 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:07:53.918562 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:07:53.918569 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:07:53.918576 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 00:07:53.918582 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 00:07:53.918589 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:07:53.918597 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:07:53.918603 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 12 00:07:53.918610 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 12 00:07:53.918617 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:07:53.918625 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:07:53.918632 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:07:53.918639 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:07:53.918646 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:07:53.918652 kernel: arm-pv: using stolen time PV
Jul 12 00:07:53.918660 kernel: Console: colour dummy device 80x25
Jul 12 00:07:53.918667 kernel: ACPI: Core revision 20230628
Jul 12 00:07:53.918674 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:07:53.918681 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:07:53.918688 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:07:53.918696 kernel: landlock: Up and running.
Jul 12 00:07:53.918703 kernel: SELinux: Initializing.
Jul 12 00:07:53.918725 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:07:53.918732 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:07:53.918739 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:07:53.918746 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:07:53.918753 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:07:53.918760 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:07:53.918767 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 12 00:07:53.918775 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 12 00:07:53.918782 kernel: Remapping and enabling EFI services.
Jul 12 00:07:53.918789 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:07:53.918796 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:07:53.918803 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 00:07:53.918810 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 12 00:07:53.918817 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:07:53.918824 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:07:53.918831 kernel: Detected PIPT I-cache on CPU2
Jul 12 00:07:53.918838 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 12 00:07:53.918847 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 12 00:07:53.918854 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:07:53.918866 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 12 00:07:53.918874 kernel: Detected PIPT I-cache on CPU3
Jul 12 00:07:53.918881 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 12 00:07:53.918889 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 12 00:07:53.918896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:07:53.918903 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 12 00:07:53.918911 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 00:07:53.918919 kernel: SMP: Total of 4 processors activated.
Jul 12 00:07:53.918934 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:07:53.918941 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:07:53.918948 kernel: CPU features: detected: Common not Private translations
Jul 12 00:07:53.918956 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:07:53.918963 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 12 00:07:53.918971 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:07:53.918978 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:07:53.918988 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:07:53.918995 kernel: CPU features: detected: RAS Extension Support
Jul 12 00:07:53.919002 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 00:07:53.919009 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:07:53.919016 kernel: alternatives: applying system-wide alternatives
Jul 12 00:07:53.919023 kernel: devtmpfs: initialized
Jul 12 00:07:53.919031 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:07:53.919038 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 00:07:53.919045 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:07:53.919054 kernel: SMBIOS 3.0.0 present.
Jul 12 00:07:53.919061 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 12 00:07:53.919068 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:07:53.919075 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:07:53.919083 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:07:53.919090 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:07:53.919097 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:07:53.919104 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 12 00:07:53.919112 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:07:53.919120 kernel: cpuidle: using governor menu
Jul 12 00:07:53.919127 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:07:53.919134 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:07:53.919142 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:07:53.919149 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:07:53.919156 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:07:53.919163 kernel: Modules: 0 pages in range for non-PLT usage
Jul 12 00:07:53.919171 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:07:53.919178 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:07:53.919186 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:07:53.919194 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:07:53.919201 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:07:53.919208 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:07:53.919215 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:07:53.919222 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:07:53.919230 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:07:53.919237 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:07:53.919244 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:07:53.919253 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:07:53.919260 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:07:53.919267 kernel: ACPI: Interpreter enabled
Jul 12 00:07:53.919294 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:07:53.919303 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:07:53.919311 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:07:53.919318 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:07:53.919325 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:07:53.919465 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:07:53.919541 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:07:53.919605 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:07:53.919668 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 00:07:53.919730 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 00:07:53.919739 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 00:07:53.919751 kernel: PCI host bridge to bus 0000:00
Jul 12 00:07:53.919821 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 00:07:53.919884 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:07:53.919957 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 00:07:53.920017 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:07:53.920096 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 12 00:07:53.920172 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 12 00:07:53.920240 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 12 00:07:53.920324 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 12 00:07:53.920389 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:07:53.920454 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:07:53.920518 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 12 00:07:53.920584 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 12 00:07:53.920644 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 00:07:53.920700 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:07:53.920760 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 00:07:53.920769 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:07:53.920777 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:07:53.920784 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:07:53.920791 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:07:53.920799 kernel: iommu: Default domain type: Translated
Jul 12 00:07:53.920806 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:07:53.920813 kernel: efivars: Registered efivars operations
Jul 12 00:07:53.920821 kernel: vgaarb: loaded
Jul 12 00:07:53.920830 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:07:53.920837 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:07:53.920844 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:07:53.920851 kernel: pnp: PnP ACPI init
Jul 12 00:07:53.920931 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 00:07:53.920942 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:07:53.920950 kernel: NET: Registered PF_INET protocol family
Jul 12 00:07:53.920957 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:07:53.920966 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:07:53.920974 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:07:53.920981 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:07:53.920989 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:07:53.920996 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:07:53.921003 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:07:53.921011 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:07:53.921018 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:07:53.921025 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:07:53.921033 kernel: kvm [1]: HYP mode not available
Jul 12 00:07:53.921041 kernel: Initialise system trusted keyrings
Jul 12 00:07:53.921048 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:07:53.921055 kernel: Key type asymmetric registered
Jul 12 00:07:53.921062 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:07:53.921069 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:07:53.921077 kernel: io scheduler mq-deadline registered
Jul 12 00:07:53.921084 kernel: io scheduler kyber registered
Jul 12 00:07:53.921091 kernel: io scheduler bfq registered
Jul 12 00:07:53.921101 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:07:53.921108 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:07:53.921116 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 00:07:53.921185 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 12 00:07:53.921195 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:07:53.921203 kernel: thunder_xcv, ver 1.0
Jul 12 00:07:53.921210 kernel: thunder_bgx, ver 1.0
Jul 12 00:07:53.921217 kernel: nicpf, ver 1.0
Jul 12 00:07:53.921224 kernel: nicvf, ver 1.0
Jul 12 00:07:53.921309 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:07:53.921371 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:07:53 UTC (1752278873)
Jul 12 00:07:53.921381 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:07:53.921389 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 12 00:07:53.921396 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 12 00:07:53.921403 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:07:53.921411 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:07:53.921418 kernel: Segment Routing with IPv6
Jul 12 00:07:53.921427 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:07:53.921435 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:07:53.921442 kernel: Key type dns_resolver registered
Jul 12 00:07:53.921449 kernel: registered taskstats version 1
Jul 12 00:07:53.921456 kernel: Loading compiled-in X.509 certificates
Jul 12 00:07:53.921464 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15'
Jul 12 00:07:53.921471 kernel: Key type .fscrypt registered
Jul 12 00:07:53.921478 kernel: Key type fscrypt-provisioning registered
Jul 12 00:07:53.921485 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:07:53.921494 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:07:53.921501 kernel: ima: No architecture policies found
Jul 12 00:07:53.921508 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:07:53.921515 kernel: clk: Disabling unused clocks
Jul 12 00:07:53.921523 kernel: Freeing unused kernel memory: 39424K
Jul 12 00:07:53.921530 kernel: Run /init as init process
Jul 12 00:07:53.921537 kernel: with arguments:
Jul 12 00:07:53.921544 kernel: /init
Jul 12 00:07:53.921551 kernel: with environment:
Jul 12 00:07:53.921559 kernel: HOME=/
Jul 12 00:07:53.921566 kernel: TERM=linux
Jul 12 00:07:53.921574 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:07:53.921583 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:07:53.921592 systemd[1]: Detected virtualization kvm.
Jul 12 00:07:53.921600 systemd[1]: Detected architecture arm64.
Jul 12 00:07:53.921607 systemd[1]: Running in initrd.
Jul 12 00:07:53.921616 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:07:53.921624 systemd[1]: Hostname set to .
Jul 12 00:07:53.921632 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:07:53.921639 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:07:53.921647 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:07:53.921655 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:07:53.921663 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:07:53.921671 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:07:53.921680 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:07:53.921688 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:07:53.921698 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:07:53.921706 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:07:53.921713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:07:53.921721 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:07:53.921729 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:07:53.921738 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:07:53.921746 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:07:53.921754 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:07:53.921762 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:07:53.921770 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:07:53.921777 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:07:53.921785 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:07:53.921793 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:07:53.921801 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:07:53.921810 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:07:53.921818 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:07:53.921826 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:07:53.921834 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:07:53.921842 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:07:53.921850 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:07:53.921858 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:07:53.921865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:07:53.921875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:53.921883 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:07:53.921891 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:07:53.921898 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:07:53.921907 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:07:53.921916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:53.921947 systemd-journald[237]: Collecting audit messages is disabled.
Jul 12 00:07:53.921968 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:07:53.921977 systemd-journald[237]: Journal started
Jul 12 00:07:53.921997 systemd-journald[237]: Runtime Journal (/run/log/journal/e8aceacb581340fdb96c9eb304c4caa7) is 5.9M, max 47.3M, 41.4M free.
Jul 12 00:07:53.913173 systemd-modules-load[239]: Inserted module 'overlay'
Jul 12 00:07:53.926371 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:07:53.926776 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:07:53.930448 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:07:53.933178 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:07:53.933432 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:07:53.938603 kernel: Bridge firewalling registered
Jul 12 00:07:53.933770 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 12 00:07:53.936645 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:07:53.942088 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:07:53.944752 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:07:53.949520 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:07:53.952191 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:07:53.966477 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:07:53.967737 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:07:53.971017 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:07:53.976622 dracut-cmdline[274]: dracut-dracut-053
Jul 12 00:07:53.979111 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:07:54.009610 systemd-resolved[280]: Positive Trust Anchors:
Jul 12 00:07:54.009630 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:07:54.009662 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:07:54.014601 systemd-resolved[280]: Defaulting to hostname 'linux'.
Jul 12 00:07:54.015625 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:07:54.019429 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:07:54.053306 kernel: SCSI subsystem initialized
Jul 12 00:07:54.058296 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:07:54.065300 kernel: iscsi: registered transport (tcp)
Jul 12 00:07:54.079318 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:07:54.079363 kernel: QLogic iSCSI HBA Driver
Jul 12 00:07:54.123919 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:07:54.131492 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:07:54.148950 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:07:54.149019 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:07:54.150110 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:07:54.198312 kernel: raid6: neonx8 gen() 15777 MB/s
Jul 12 00:07:54.215294 kernel: raid6: neonx4 gen() 15647 MB/s
Jul 12 00:07:54.232306 kernel: raid6: neonx2 gen() 11922 MB/s
Jul 12 00:07:54.249296 kernel: raid6: neonx1 gen() 10473 MB/s
Jul 12 00:07:54.266299 kernel: raid6: int64x8 gen() 6938 MB/s
Jul 12 00:07:54.283293 kernel: raid6: int64x4 gen() 7272 MB/s
Jul 12 00:07:54.300311 kernel: raid6: int64x2 gen() 5847 MB/s
Jul 12 00:07:54.317407 kernel: raid6: int64x1 gen() 5049 MB/s
Jul 12 00:07:54.317454 kernel: raid6: using algorithm neonx8 gen() 15777 MB/s
Jul 12 00:07:54.335404 kernel: raid6: .... xor() 11868 MB/s, rmw enabled
Jul 12 00:07:54.335441 kernel: raid6: using neon recovery algorithm
Jul 12 00:07:54.341700 kernel: xor: measuring software checksum speed
Jul 12 00:07:54.341727 kernel: 8regs : 19778 MB/sec
Jul 12 00:07:54.341737 kernel: 32regs : 19617 MB/sec
Jul 12 00:07:54.342313 kernel: arm64_neon : 26963 MB/sec
Jul 12 00:07:54.342326 kernel: xor: using function: arm64_neon (26963 MB/sec)
Jul 12 00:07:54.395486 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:07:54.410303 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:07:54.424502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:07:54.437248 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jul 12 00:07:54.440447 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:07:54.459181 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:07:54.475361 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jul 12 00:07:54.509167 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:07:54.525526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:07:54.581495 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:07:54.594641 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:07:54.615317 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:07:54.616891 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:07:54.619046 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:07:54.621487 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:07:54.634482 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:07:54.646491 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 12 00:07:54.646679 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 12 00:07:54.648943 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:07:54.654785 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:07:54.654809 kernel: GPT:9289727 != 19775487
Jul 12 00:07:54.654818 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:07:54.654827 kernel: GPT:9289727 != 19775487
Jul 12 00:07:54.654836 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:07:54.654853 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:07:54.659618 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:07:54.659765 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:07:54.666422 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:07:54.667656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:07:54.673136 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (512)
Jul 12 00:07:54.667989 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:54.672134 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:54.678529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:54.683296 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (520)
Jul 12 00:07:54.689849 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 12 00:07:54.694535 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 12 00:07:54.696025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:54.706976 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 12 00:07:54.708242 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 12 00:07:54.714088 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 00:07:54.726470 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:07:54.728496 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:07:54.734179 disk-uuid[553]: Primary Header is updated.
Jul 12 00:07:54.734179 disk-uuid[553]: Secondary Entries is updated.
Jul 12 00:07:54.734179 disk-uuid[553]: Secondary Header is updated.
Jul 12 00:07:54.741303 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:07:54.755553 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:07:55.752299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:07:55.755264 disk-uuid[555]: The operation has completed successfully.
Jul 12 00:07:55.780941 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:07:55.781040 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:07:55.798478 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:07:55.806315 sh[578]: Success
Jul 12 00:07:55.821300 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:07:55.850754 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:07:55.869793 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:07:55.871815 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:07:55.882727 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:07:55.882784 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:55.883886 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:07:55.884709 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:07:55.884739 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:07:55.888712 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:07:55.890106 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:07:55.900475 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:07:55.902105 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:07:55.909998 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:55.910042 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:55.910053 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:07:55.913317 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:07:55.920115 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:07:55.922313 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:55.928241 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:07:55.936518 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:07:56.005988 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:07:56.018504 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:07:56.043408 ignition[669]: Ignition 2.19.0
Jul 12 00:07:56.043419 ignition[669]: Stage: fetch-offline
Jul 12 00:07:56.043455 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:56.043463 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:07:56.043671 ignition[669]: parsed url from cmdline: ""
Jul 12 00:07:56.043677 ignition[669]: no config URL provided
Jul 12 00:07:56.043682 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:07:56.043689 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:07:56.043712 ignition[669]: op(1): [started] loading QEMU firmware config module
Jul 12 00:07:56.043717 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 12 00:07:56.050651 systemd-networkd[771]: lo: Link UP
Jul 12 00:07:56.050237 ignition[669]: op(1): [finished] loading QEMU firmware config module
Jul 12 00:07:56.050654 systemd-networkd[771]: lo: Gained carrier
Jul 12 00:07:56.051339 systemd-networkd[771]: Enumeration completed
Jul 12 00:07:56.051743 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:07:56.051746 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:07:56.052518 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:07:56.052620 systemd-networkd[771]: eth0: Link UP
Jul 12 00:07:56.052623 systemd-networkd[771]: eth0: Gained carrier
Jul 12 00:07:56.052629 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:07:56.053882 systemd[1]: Reached target network.target - Network.
Jul 12 00:07:56.076342 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:07:56.100324 ignition[669]: parsing config with SHA512: 8634f09d5d9bb0504ae6cb3eafd5588a5609001c63d64316b43760c6e08ffa3b7a3564f00dd4fcfe143092781fe97f2f0c823a42e9a8ce6468f331c768be9f91
Jul 12 00:07:56.104381 unknown[669]: fetched base config from "system"
Jul 12 00:07:56.104390 unknown[669]: fetched user config from "qemu"
Jul 12 00:07:56.104954 ignition[669]: fetch-offline: fetch-offline passed
Jul 12 00:07:56.106764 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:07:56.105036 ignition[669]: Ignition finished successfully
Jul 12 00:07:56.108141 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 12 00:07:56.115551 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:07:56.126378 ignition[778]: Ignition 2.19.0
Jul 12 00:07:56.126390 ignition[778]: Stage: kargs
Jul 12 00:07:56.126570 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:56.126580 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:07:56.127470 ignition[778]: kargs: kargs passed
Jul 12 00:07:56.127518 ignition[778]: Ignition finished successfully
Jul 12 00:07:56.131527 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:07:56.142436 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:07:56.152635 ignition[786]: Ignition 2.19.0
Jul 12 00:07:56.152647 ignition[786]: Stage: disks
Jul 12 00:07:56.152827 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:56.155733 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:07:56.152837 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:07:56.156994 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:07:56.153759 ignition[786]: disks: disks passed
Jul 12 00:07:56.158634 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:07:56.153806 ignition[786]: Ignition finished successfully
Jul 12 00:07:56.160623 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:07:56.162406 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:07:56.163819 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:07:56.172426 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:07:56.183554 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 12 00:07:56.187163 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:07:56.200436 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:07:56.242291 kernel: EXT4-fs (vda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:07:56.242682 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:07:56.243941 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:07:56.251383 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:07:56.253130 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:07:56.254299 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 12 00:07:56.254349 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:07:56.254388 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:07:56.263187 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (804)
Jul 12 00:07:56.260927 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:07:56.262823 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:07:56.268594 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:56.268615 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:56.268626 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:07:56.271524 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:07:56.272597 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:07:56.308976 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:07:56.313746 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:07:56.317257 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:07:56.321315 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:07:56.392764 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:07:56.403413 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:07:56.405782 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:07:56.411301 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:56.423771 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:07:56.428517 ignition[919]: INFO : Ignition 2.19.0
Jul 12 00:07:56.428517 ignition[919]: INFO : Stage: mount
Jul 12 00:07:56.430870 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:56.430870 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:07:56.430870 ignition[919]: INFO : mount: mount passed
Jul 12 00:07:56.430870 ignition[919]: INFO : Ignition finished successfully
Jul 12 00:07:56.431271 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:07:56.439397 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:07:56.881431 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:07:56.890465 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:07:56.897282 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (931)
Jul 12 00:07:56.897320 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:56.897331 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:56.898967 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:07:56.901300 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:07:56.902391 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:07:56.919839 ignition[948]: INFO : Ignition 2.19.0
Jul 12 00:07:56.919839 ignition[948]: INFO : Stage: files
Jul 12 00:07:56.921621 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:56.921621 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:07:56.921621 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:07:56.925246 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:07:56.925246 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:07:56.925246 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:07:56.929653 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:07:56.929653 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:07:56.929653 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:07:56.929653 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 12 00:07:56.925781 unknown[948]: wrote ssh authorized keys file for user: core
Jul 12 00:07:56.977369 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:07:57.076857 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:07:57.079048 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 12 00:07:57.626691 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 12 00:07:57.822594 systemd-networkd[771]: eth0: Gained IPv6LL
Jul 12 00:07:58.022976 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:07:58.022976 ignition[948]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 12 00:07:58.026643 ignition[948]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:07:58.026643 ignition[948]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:07:58.026643 ignition[948]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 12 00:07:58.026643 ignition[948]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 12 00:07:58.026643 ignition[948]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:07:58.026643 ignition[948]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:07:58.026643 ignition[948]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 12 00:07:58.026643 ignition[948]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:07:58.046632 ignition[948]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:07:58.050763 ignition[948]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:07:58.053336 ignition[948]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:07:58.053336 ignition[948]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:07:58.053336 ignition[948]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:07:58.053336 ignition[948]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:07:58.053336 ignition[948]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:07:58.053336 ignition[948]: INFO : files: files passed
Jul 12 00:07:58.053336 ignition[948]: INFO : Ignition finished successfully
Jul 12 00:07:58.053739 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:07:58.066465 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:07:58.069638 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:07:58.070969 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:07:58.071055 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:07:58.077934 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 12 00:07:58.081659 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:58.081659 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:58.084876 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:58.084033 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:07:58.086551 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:07:58.104508 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:07:58.134546 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:07:58.135351 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:07:58.136887 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:07:58.138657 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:07:58.140388 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:07:58.141208 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:07:58.159319 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:07:58.171486 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:07:58.179660 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:07:58.180944 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:07:58.182981 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:07:58.184781 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:07:58.184924 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:07:58.187474 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:07:58.189462 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:07:58.191130 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:07:58.192871 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:07:58.194970 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:07:58.196953 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:07:58.198763 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:07:58.200704 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:07:58.202670 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:07:58.204366 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:07:58.205896 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:07:58.206042 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:07:58.208318 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:07:58.210252 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:07:58.212157 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 00:07:58.215338 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:07:58.216578 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:07:58.216706 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 00:07:58.220136 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:07:58.220254 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:07:58.222265 systemd[1]: Stopped target paths.target - Path Units. Jul 12 00:07:58.223853 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:07:58.227333 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:07:58.228598 systemd[1]: Stopped target slices.target - Slice Units. Jul 12 00:07:58.230636 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 00:07:58.232214 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:07:58.232324 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:07:58.233891 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:07:58.233986 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:07:58.235479 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:07:58.235594 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:07:58.237362 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:07:58.237469 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 00:07:58.249500 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 00:07:58.250469 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:07:58.250622 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:07:58.254747 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 12 00:07:58.256531 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:07:58.257677 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:07:58.260529 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:07:58.262710 ignition[1002]: INFO : Ignition 2.19.0 Jul 12 00:07:58.262710 ignition[1002]: INFO : Stage: umount Jul 12 00:07:58.262710 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:07:58.262710 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:07:58.260648 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:07:58.269158 ignition[1002]: INFO : umount: umount passed Jul 12 00:07:58.269158 ignition[1002]: INFO : Ignition finished successfully Jul 12 00:07:58.265698 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:07:58.265797 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 12 00:07:58.269058 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:07:58.269606 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:07:58.269732 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 12 00:07:58.272714 systemd[1]: Stopped target network.target - Network. Jul 12 00:07:58.274135 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:07:58.274220 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 00:07:58.276325 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:07:58.276380 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 00:07:58.278301 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:07:58.278357 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 12 00:07:58.280337 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 00:07:58.280394 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 00:07:58.282603 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 12 00:07:58.284313 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 00:07:58.293906 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:07:58.294048 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 00:07:58.294351 systemd-networkd[771]: eth0: DHCPv6 lease lost Jul 12 00:07:58.296299 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:07:58.296423 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 00:07:58.299156 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:07:58.299204 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:07:58.314401 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 00:07:58.315339 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:07:58.315410 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:07:58.317476 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:07:58.317520 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:07:58.319236 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:07:58.319295 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 00:07:58.321411 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 00:07:58.321455 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:07:58.323396 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:07:58.334454 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:07:58.334574 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:07:58.345066 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:07:58.345214 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:07:58.350689 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:07:58.350733 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:07:58.352563 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:07:58.352605 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:07:58.354369 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:07:58.354425 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jul 12 00:07:58.357182 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:07:58.357230 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:07:58.360661 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:07:58.360710 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:07:58.369436 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:07:58.370483 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:07:58.370552 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:07:58.372644 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:07:58.372690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:07:58.374860 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:07:58.374963 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 00:07:58.377599 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:07:58.378340 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 00:07:58.380492 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:07:58.381622 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:07:58.381690 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 00:07:58.384126 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:07:58.394316 systemd[1]: Switching root. Jul 12 00:07:58.423711 systemd-journald[237]: Journal stopped Jul 12 00:07:59.214890 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jul 12 00:07:59.214951 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:07:59.214971 kernel: SELinux: policy capability open_perms=1 Jul 12 00:07:59.214981 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:07:59.214994 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:07:59.215006 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:07:59.215019 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:07:59.215031 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:07:59.215040 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:07:59.215050 kernel: audit: type=1403 audit(1752278878.634:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:07:59.215061 systemd[1]: Successfully loaded SELinux policy in 35.868ms. Jul 12 00:07:59.215073 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.596ms. Jul 12 00:07:59.215085 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:07:59.215096 systemd[1]: Detected virtualization kvm. Jul 12 00:07:59.215107 systemd[1]: Detected architecture arm64. Jul 12 00:07:59.215117 systemd[1]: Detected first boot. Jul 12 00:07:59.215127 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:07:59.215138 zram_generator::config[1046]: No configuration found. Jul 12 00:07:59.215149 systemd[1]: Populated /etc with preset unit settings. 
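"Initializing machine ID from VM UUID" means systemd seeds /etc/machine-id from the hypervisor-supplied UUID instead of generating a random one. A hedged sketch of the derivation, assuming the KVM DMI path (the exact source systemd consults varies by platform):

def machine_id_from_vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    # machine-id format: 32 lowercase hex digits, i.e. the UUID with dashes stripped
    with open(path) as f:
        uuid = f.read().strip()
    mid = uuid.replace("-", "").lower()
    assert len(mid) == 32 and all(c in "0123456789abcdef" for c in mid)
    return mid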
Jul 12 00:07:59.215159 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:07:59.215169 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 12 00:07:59.215179 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:07:59.215191 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 00:07:59.215202 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 12 00:07:59.215213 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 00:07:59.215223 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 00:07:59.215234 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 00:07:59.215246 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 00:07:59.215256 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 00:07:59.215267 systemd[1]: Created slice user.slice - User and Session Slice. Jul 12 00:07:59.215290 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:07:59.215302 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:07:59.215312 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 00:07:59.215323 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 00:07:59.215333 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 00:07:59.215344 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:07:59.215354 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 12 00:07:59.215364 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:07:59.215374 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 00:07:59.215386 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 00:07:59.215396 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 00:07:59.215407 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:07:59.215419 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:07:59.215429 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:07:59.215440 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:07:59.215450 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:07:59.215460 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:07:59.215478 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:07:59.215489 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:07:59.215500 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:07:59.215511 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:07:59.215521 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 00:07:59.215533 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jul 12 00:07:59.215543 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:07:59.215553 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 00:07:59.215563 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:07:59.215575 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:07:59.215585 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 12 00:07:59.215596 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:07:59.215607 systemd[1]: Reached target machines.target - Containers. Jul 12 00:07:59.215617 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:07:59.215627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:07:59.215637 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:07:59.215647 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:07:59.215658 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:07:59.215670 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:07:59.215680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:07:59.215691 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:07:59.215701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:07:59.215713 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:07:59.215723 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:07:59.215734 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 00:07:59.215744 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:07:59.215756 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:07:59.215766 kernel: loop: module loaded Jul 12 00:07:59.215775 kernel: fuse: init (API version 7.39) Jul 12 00:07:59.215784 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:07:59.215794 kernel: ACPI: bus type drm_connector registered Jul 12 00:07:59.215804 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:07:59.215815 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:07:59.215825 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:07:59.215851 systemd-journald[1113]: Collecting audit messages is disabled. Jul 12 00:07:59.215874 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:07:59.215886 systemd-journald[1113]: Journal started Jul 12 00:07:59.215912 systemd-journald[1113]: Runtime Journal (/run/log/journal/e8aceacb581340fdb96c9eb304c4caa7) is 5.9M, max 47.3M, 41.4M free. Jul 12 00:07:59.001145 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:07:59.016326 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 12 00:07:59.016699 systemd[1]: systemd-journald.service: Deactivated successfully. 
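The run of modprobe@*.service units above are instances of a single template unit: the text after the '@' becomes the %i specifier when the instance starts. A toy expansion in Python (the ExecStart shape shown is an assumption for illustration, not quoted from the shipped unit file):

TEMPLATE_EXECSTART = "/usr/sbin/modprobe -abq %i"  # assumed template command

def instantiate(instance: str) -> str:
    # systemd substitutes %i with the instance name
    return TEMPLATE_EXECSTART.replace("%i", instance)

for mod in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
    print(f"modprobe@{mod}.service ->", instantiate(mod))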
Jul 12 00:07:59.218304 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:07:59.218356 systemd[1]: Stopped verity-setup.service. Jul 12 00:07:59.222696 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:07:59.223409 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 00:07:59.224666 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:07:59.225881 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 00:07:59.227041 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:07:59.228372 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:07:59.229571 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 00:07:59.230792 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:07:59.232233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:07:59.233709 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:07:59.233847 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:07:59.235255 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:07:59.235426 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:07:59.236789 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:07:59.236943 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:07:59.238236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:07:59.238383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:07:59.239993 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:07:59.240119 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 00:07:59.241498 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:07:59.243370 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:07:59.244711 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:07:59.246127 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:07:59.248083 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:07:59.260591 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:07:59.267403 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:07:59.269611 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 00:07:59.270753 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:07:59.270788 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:07:59.272814 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 12 00:07:59.275160 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:07:59.277442 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:07:59.278596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:07:59.279961 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jul 12 00:07:59.281997 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 12 00:07:59.283251 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:07:59.284451 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 00:07:59.285566 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:07:59.291470 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:07:59.294602 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 12 00:07:59.294749 systemd-journald[1113]: Time spent on flushing to /var/log/journal/e8aceacb581340fdb96c9eb304c4caa7 is 30.700ms for 852 entries. Jul 12 00:07:59.294749 systemd-journald[1113]: System Journal (/var/log/journal/e8aceacb581340fdb96c9eb304c4caa7) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:07:59.330927 systemd-journald[1113]: Received client request to flush runtime journal. Jul 12 00:07:59.330967 kernel: loop0: detected capacity change from 0 to 114432 Jul 12 00:07:59.298487 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 00:07:59.301203 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:07:59.302782 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 00:07:59.304109 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 00:07:59.305762 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:07:59.307363 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 00:07:59.310770 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 00:07:59.321543 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 12 00:07:59.324536 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 12 00:07:59.330348 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:07:59.335583 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 00:07:59.337420 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:07:59.351924 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:07:59.352639 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 12 00:07:59.355626 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:07:59.364030 kernel: loop1: detected capacity change from 0 to 114328 Jul 12 00:07:59.365381 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 12 00:07:59.373579 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:07:59.390866 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jul 12 00:07:59.390885 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jul 12 00:07:59.395182 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
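Once the flush above lands entries under /var/log/journal, records like these can be read back programmatically. A small sketch using journalctl's JSON output (-b, -u and -o json are standard journalctl options; the unit filter is just an example):

import json
import subprocess

out = subprocess.run(
    ["journalctl", "-b", "-u", "systemd-journald.service", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout
for line in out.splitlines():
    entry = json.loads(line)  # one JSON object per journal record
    print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))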
Jul 12 00:07:59.397306 kernel: loop2: detected capacity change from 0 to 207008 Jul 12 00:07:59.438307 kernel: loop3: detected capacity change from 0 to 114432 Jul 12 00:07:59.444328 kernel: loop4: detected capacity change from 0 to 114328 Jul 12 00:07:59.450304 kernel: loop5: detected capacity change from 0 to 207008 Jul 12 00:07:59.454083 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 12 00:07:59.454481 (sd-merge)[1182]: Merged extensions into '/usr'. Jul 12 00:07:59.458183 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:07:59.458206 systemd[1]: Reloading... Jul 12 00:07:59.520487 zram_generator::config[1211]: No configuration found. Jul 12 00:07:59.580775 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:07:59.618922 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:59.656862 systemd[1]: Reloading finished in 198 ms. Jul 12 00:07:59.688455 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:07:59.689940 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:07:59.702450 systemd[1]: Starting ensure-sysext.service... Jul 12 00:07:59.704366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:07:59.711159 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:07:59.711178 systemd[1]: Reloading... Jul 12 00:07:59.722091 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:07:59.722399 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:07:59.723036 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:07:59.723256 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jul 12 00:07:59.723341 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jul 12 00:07:59.726503 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:07:59.726519 systemd-tmpfiles[1244]: Skipping /boot Jul 12 00:07:59.733924 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:07:59.733939 systemd-tmpfiles[1244]: Skipping /boot Jul 12 00:07:59.758441 zram_generator::config[1271]: No configuration found. Jul 12 00:07:59.841488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:59.877214 systemd[1]: Reloading finished in 165 ms. Jul 12 00:07:59.894181 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:07:59.902741 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:07:59.910678 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:07:59.913292 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
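The '(sd-merge)' lines record systemd-sysext stacking the containerd-flatcar, docker-flatcar and kubernetes extension images over /usr. Conceptually the merge is a read-only overlayfs mount with the extensions as lower layers above the host tree; the sketch below shows that shape with hypothetical staging paths (sysext's real mount points differ):

import subprocess

host_usr = "/usr"
layers = [
    "/run/sysext/kubernetes/usr",         # hypothetical unpacked extension trees
    "/run/sysext/docker-flatcar/usr",
    "/run/sysext/containerd-flatcar/usr",
]
# In overlayfs, earlier lowerdir entries take precedence over later ones,
# so the extensions shadow matching paths in the host /usr. With no
# upperdir the resulting mount is read-only.
lowerdir = ":".join(layers + [host_usr])
subprocess.run(
    ["mount", "-t", "overlay", "overlay", "-o", f"lowerdir={lowerdir}", host_usr],
    check=True,
)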
Jul 12 00:07:59.915558 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:07:59.919557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:07:59.924577 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:07:59.940708 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:07:59.943402 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:07:59.951077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:07:59.960624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:07:59.964875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:07:59.967862 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Jul 12 00:07:59.969637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:07:59.973746 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:07:59.975353 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:07:59.978464 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:07:59.981501 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:07:59.987018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:07:59.987165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:07:59.989015 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:07:59.989154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:07:59.991091 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:07:59.991218 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:07:59.992957 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:07:59.996494 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:08:00.000341 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:08:00.007742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:08:00.014456 augenrules[1336]: No rules Jul 12 00:08:00.017295 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:08:00.023700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:08:00.027573 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:08:00.028724 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:08:00.031610 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:08:00.033213 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:08:00.036497 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jul 12 00:08:00.038586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:08:00.038719 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:08:00.041979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:08:00.042114 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:08:00.044295 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1351) Jul 12 00:08:00.045564 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:08:00.062341 systemd[1]: Finished ensure-sysext.service. Jul 12 00:08:00.063557 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:08:00.063715 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:08:00.068249 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 12 00:08:00.081131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:08:00.091541 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:08:00.095494 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:08:00.097725 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:08:00.099032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:08:00.102480 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 00:08:00.103804 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:08:00.104249 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:08:00.104398 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:08:00.110312 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:08:00.113735 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:08:00.117377 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:08:00.119074 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:08:00.119211 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:08:00.121725 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:08:00.135510 systemd-networkd[1367]: lo: Link UP Jul 12 00:08:00.135519 systemd-networkd[1367]: lo: Gained carrier Jul 12 00:08:00.139123 systemd-networkd[1367]: Enumeration completed Jul 12 00:08:00.140403 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:08:00.140885 systemd-resolved[1311]: Positive Trust Anchors: Jul 12 00:08:00.140897 systemd-resolved[1311]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:08:00.140940 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:08:00.146659 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:08:00.146669 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:08:00.147480 systemd-networkd[1367]: eth0: Link UP Jul 12 00:08:00.147490 systemd-networkd[1367]: eth0: Gained carrier Jul 12 00:08:00.147504 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:08:00.151492 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:08:00.155629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 00:08:00.156948 systemd-resolved[1311]: Defaulting to hostname 'linux'. Jul 12 00:08:00.163711 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:08:00.168367 systemd-networkd[1367]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:08:00.168889 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:08:00.170197 systemd[1]: Reached target network.target - Network. Jul 12 00:08:00.171294 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:08:00.193226 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:08:00.193974 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 12 00:08:00.194020 systemd-timesyncd[1386]: Initial clock synchronization to Sat 2025-07-12 00:08:00.336503 UTC. Jul 12 00:08:00.195779 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 00:08:00.197202 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:08:00.229555 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:08:00.235227 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 12 00:08:00.237961 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:08:00.254236 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:08:00.268368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:08:00.282356 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 12 00:08:00.284332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:08:00.285480 systemd[1]: Reached target sysinit.target - System Initialization. 
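systemd-timesyncd's exchange with 10.0.0.1:123 above is plain (S)NTP over UDP. A minimal client sketch against the same server address, purely to illustrate the protocol (this is not timesyncd's implementation):

import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds from 1900-01-01 (NTP) to 1970-01-01 (Unix)

def sntp_time(server: str = "10.0.0.1", port: int = 123, timeout: float = 2.0) -> float:
    pkt = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client), rest zeroed
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(pkt, (server, port))
        data, _ = s.recvfrom(512)
    secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, integer seconds
    return secs - NTP_EPOCH_OFFSET

print(time.ctime(sntp_time()))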
Jul 12 00:08:00.286624 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:08:00.287834 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:08:00.289257 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:08:00.290380 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:08:00.291609 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:08:00.292798 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:08:00.292837 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:08:00.293710 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:08:00.295191 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:08:00.297622 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:08:00.309333 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:08:00.311748 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 12 00:08:00.313434 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:08:00.314567 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:08:00.315502 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:08:00.316427 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:08:00.316466 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:08:00.317424 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:08:00.319426 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:08:00.320342 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:08:00.322443 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:08:00.324510 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:08:00.325606 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:08:00.327250 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:08:00.331406 jq[1413]: false Jul 12 00:08:00.331565 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:08:00.333687 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:08:00.337477 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:08:00.344090 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:08:00.351561 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:08:00.351961 dbus-daemon[1412]: [system] SELinux support is enabled Jul 12 00:08:00.352060 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:08:00.354475 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 12 00:08:00.354718 extend-filesystems[1414]: Found loop3 Jul 12 00:08:00.354718 extend-filesystems[1414]: Found loop4 Jul 12 00:08:00.354718 extend-filesystems[1414]: Found loop5 Jul 12 00:08:00.354718 extend-filesystems[1414]: Found vda Jul 12 00:08:00.354718 extend-filesystems[1414]: Found vda1 Jul 12 00:08:00.354718 extend-filesystems[1414]: Found vda2 Jul 12 00:08:00.354718 extend-filesystems[1414]: Found vda3 Jul 12 00:08:00.354718 extend-filesystems[1414]: Found usr Jul 12 00:08:00.354718 extend-filesystems[1414]: Found vda4 Jul 12 00:08:00.354718 extend-filesystems[1414]: Found vda6 Jul 12 00:08:00.371451 extend-filesystems[1414]: Found vda7 Jul 12 00:08:00.371451 extend-filesystems[1414]: Found vda9 Jul 12 00:08:00.371451 extend-filesystems[1414]: Checking size of /dev/vda9 Jul 12 00:08:00.358484 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:08:00.360892 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:08:00.366412 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 12 00:08:00.377730 jq[1429]: true Jul 12 00:08:00.369696 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:08:00.370120 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:08:00.371687 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:08:00.371835 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:08:00.386073 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:08:00.386120 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:08:00.388296 jq[1439]: true Jul 12 00:08:00.389454 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:08:00.389485 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:08:00.391247 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:08:00.394680 extend-filesystems[1414]: Resized partition /dev/vda9 Jul 12 00:08:00.398802 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:08:00.400233 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:08:00.404143 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Jul 12 00:08:00.412025 tar[1433]: linux-arm64/LICENSE Jul 12 00:08:00.412025 tar[1433]: linux-arm64/helm Jul 12 00:08:00.422884 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 12 00:08:00.422992 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1348) Jul 12 00:08:00.425346 update_engine[1426]: I20250712 00:08:00.425068 1426 main.cc:92] Flatcar Update Engine starting Jul 12 00:08:00.427802 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:08:00.430310 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:08:00.431508 systemd-logind[1421]: New seat seat0. 
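For scale, the EXT4 resize reported above takes /dev/vda9 from 553472 to 1864699 blocks; with the 4 KiB block size resize2fs notes just below, that is:

BLOCK = 4096  # bytes, per the "(4k) blocks" in the resize2fs output
old_blocks, new_blocks = 553472, 1864699
print(f"{old_blocks * BLOCK / 2**30:.2f} GiB -> {new_blocks * BLOCK / 2**30:.2f} GiB")
# 2.11 GiB -> 7.11 GiB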
Jul 12 00:08:00.431579 update_engine[1426]: I20250712 00:08:00.431529 1426 update_check_scheduler.cc:74] Next update check in 8m21s Jul 12 00:08:00.442480 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:08:00.446308 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:08:00.477913 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 12 00:08:00.499099 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 12 00:08:00.499099 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:08:00.499099 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 12 00:08:00.507582 extend-filesystems[1414]: Resized filesystem in /dev/vda9 Jul 12 00:08:00.501470 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:08:00.501650 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:08:00.517690 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:08:00.547798 bash[1466]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:08:00.550320 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:08:00.552112 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 12 00:08:00.583781 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:08:00.603200 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:08:00.617600 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:08:00.623481 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:08:00.625426 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:08:00.628996 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:08:00.643456 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:08:00.654792 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:08:00.654942 containerd[1440]: time="2025-07-12T00:08:00.654822440Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:08:00.657744 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 12 00:08:00.659114 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:08:00.680406 containerd[1440]: time="2025-07-12T00:08:00.680342560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:08:00.681856 containerd[1440]: time="2025-07-12T00:08:00.681784800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:08:00.681856 containerd[1440]: time="2025-07-12T00:08:00.681831720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:08:00.681856 containerd[1440]: time="2025-07-12T00:08:00.681852440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:08:00.682059 containerd[1440]: time="2025-07-12T00:08:00.682025640Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 12 00:08:00.682059 containerd[1440]: time="2025-07-12T00:08:00.682051880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 12 00:08:00.682120 containerd[1440]: time="2025-07-12T00:08:00.682107160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:08:00.682141 containerd[1440]: time="2025-07-12T00:08:00.682119600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:08:00.682318 containerd[1440]: time="2025-07-12T00:08:00.682298320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:08:00.682355 containerd[1440]: time="2025-07-12T00:08:00.682317320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:08:00.682355 containerd[1440]: time="2025-07-12T00:08:00.682330360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:08:00.682355 containerd[1440]: time="2025-07-12T00:08:00.682339760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:08:00.682423 containerd[1440]: time="2025-07-12T00:08:00.682408360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:08:00.682607 containerd[1440]: time="2025-07-12T00:08:00.682589400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:08:00.682706 containerd[1440]: time="2025-07-12T00:08:00.682689280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:08:00.682730 containerd[1440]: time="2025-07-12T00:08:00.682709200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:08:00.682797 containerd[1440]: time="2025-07-12T00:08:00.682783120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:08:00.682839 containerd[1440]: time="2025-07-12T00:08:00.682828040Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:08:00.691454 containerd[1440]: time="2025-07-12T00:08:00.691403760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:08:00.691552 containerd[1440]: time="2025-07-12T00:08:00.691477720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:08:00.691552 containerd[1440]: time="2025-07-12T00:08:00.691496680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 12 00:08:00.691552 containerd[1440]: time="2025-07-12T00:08:00.691512520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jul 12 00:08:00.691552 containerd[1440]: time="2025-07-12T00:08:00.691528320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:08:00.691747 containerd[1440]: time="2025-07-12T00:08:00.691717240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:08:00.692019 containerd[1440]: time="2025-07-12T00:08:00.691990280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:08:00.692124 containerd[1440]: time="2025-07-12T00:08:00.692102200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:08:00.692147 containerd[1440]: time="2025-07-12T00:08:00.692124080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 12 00:08:00.692147 containerd[1440]: time="2025-07-12T00:08:00.692136560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 12 00:08:00.692181 containerd[1440]: time="2025-07-12T00:08:00.692149720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:08:00.692181 containerd[1440]: time="2025-07-12T00:08:00.692163680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:08:00.692181 containerd[1440]: time="2025-07-12T00:08:00.692175840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:08:00.692237 containerd[1440]: time="2025-07-12T00:08:00.692189440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:08:00.692237 containerd[1440]: time="2025-07-12T00:08:00.692203680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:08:00.692237 containerd[1440]: time="2025-07-12T00:08:00.692216040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:08:00.692237 containerd[1440]: time="2025-07-12T00:08:00.692227200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:08:00.692312 containerd[1440]: time="2025-07-12T00:08:00.692238080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:08:00.692312 containerd[1440]: time="2025-07-12T00:08:00.692257120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692312 containerd[1440]: time="2025-07-12T00:08:00.692300400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692371 containerd[1440]: time="2025-07-12T00:08:00.692315720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692371 containerd[1440]: time="2025-07-12T00:08:00.692328360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692371 containerd[1440]: time="2025-07-12T00:08:00.692340560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jul 12 00:08:00.692371 containerd[1440]: time="2025-07-12T00:08:00.692352640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692371 containerd[1440]: time="2025-07-12T00:08:00.692365080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692456 containerd[1440]: time="2025-07-12T00:08:00.692377560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692456 containerd[1440]: time="2025-07-12T00:08:00.692390040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692456 containerd[1440]: time="2025-07-12T00:08:00.692404280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692456 containerd[1440]: time="2025-07-12T00:08:00.692415440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692456 containerd[1440]: time="2025-07-12T00:08:00.692427520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692456 containerd[1440]: time="2025-07-12T00:08:00.692439560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692456 containerd[1440]: time="2025-07-12T00:08:00.692454560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:08:00.692572 containerd[1440]: time="2025-07-12T00:08:00.692474920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692572 containerd[1440]: time="2025-07-12T00:08:00.692487200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692572 containerd[1440]: time="2025-07-12T00:08:00.692499280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:08:00.692621 containerd[1440]: time="2025-07-12T00:08:00.692608640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:08:00.692642 containerd[1440]: time="2025-07-12T00:08:00.692626680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:08:00.692642 containerd[1440]: time="2025-07-12T00:08:00.692637680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:08:00.692680 containerd[1440]: time="2025-07-12T00:08:00.692650160Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:08:00.692680 containerd[1440]: time="2025-07-12T00:08:00.692661320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.692680 containerd[1440]: time="2025-07-12T00:08:00.692675320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 12 00:08:00.692729 containerd[1440]: time="2025-07-12T00:08:00.692685480Z" level=info msg="NRI interface is disabled by configuration." 
Jul 12 00:08:00.692729 containerd[1440]: time="2025-07-12T00:08:00.692697160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:08:00.694231 containerd[1440]: time="2025-07-12T00:08:00.694136760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:08:00.694231 containerd[1440]: time="2025-07-12T00:08:00.694215280Z" level=info msg="Connect containerd service" Jul 12 00:08:00.694393 containerd[1440]: time="2025-07-12T00:08:00.694251840Z" level=info msg="using legacy CRI server" Jul 12 00:08:00.694393 containerd[1440]: time="2025-07-12T00:08:00.694259920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:08:00.694393 containerd[1440]: time="2025-07-12T00:08:00.694372760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:08:00.695084 containerd[1440]: time="2025-07-12T00:08:00.695039600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:08:00.695317 containerd[1440]: time="2025-07-12T00:08:00.695256400Z" level=info msg="Start subscribing containerd event" Jul 12 00:08:00.695351 containerd[1440]: time="2025-07-12T00:08:00.695335880Z" level=info msg="Start recovering state" Jul 12 00:08:00.695431 containerd[1440]: time="2025-07-12T00:08:00.695409560Z" level=info msg="Start event monitor" Jul 12 00:08:00.695431 containerd[1440]: time="2025-07-12T00:08:00.695427000Z" level=info msg="Start snapshots syncer" Jul 12 00:08:00.695492 containerd[1440]: time="2025-07-12T00:08:00.695435760Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:08:00.695492 containerd[1440]: time="2025-07-12T00:08:00.695443120Z" level=info msg="Start streaming server" Jul 12 00:08:00.695592 containerd[1440]: time="2025-07-12T00:08:00.695530320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:08:00.695592 containerd[1440]: time="2025-07-12T00:08:00.695580240Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:08:00.695652 containerd[1440]: time="2025-07-12T00:08:00.695628280Z" level=info msg="containerd successfully booted in 0.041723s" Jul 12 00:08:00.698397 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:08:00.830950 tar[1433]: linux-arm64/README.md Jul 12 00:08:00.844320 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:08:01.540152 systemd-networkd[1367]: eth0: Gained IPv6LL Jul 12 00:08:01.542883 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:08:01.544678 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:08:01.557591 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 12 00:08:01.562124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:01.564406 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:08:01.580904 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 00:08:01.581278 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 12 00:08:01.583402 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:08:01.585527 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:08:02.142387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:02.143941 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:08:02.146234 systemd[1]: Startup finished in 620ms (kernel) + 4.919s (initrd) + 3.551s (userspace) = 9.090s. 
Jul 12 00:08:02.146278 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:08:02.665218 kubelet[1524]: E0712 00:08:02.665162 1524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:08:02.667567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:08:02.667714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:08:06.488028 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:08:06.489164 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:42316.service - OpenSSH per-connection server daemon (10.0.0.1:42316). Jul 12 00:08:06.548715 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 42316 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:08:06.550654 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:06.559184 systemd-logind[1421]: New session 1 of user core. Jul 12 00:08:06.560167 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:08:06.569568 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:08:06.579318 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:08:06.581534 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:08:06.591945 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:08:06.690446 systemd[1541]: Queued start job for default target default.target. Jul 12 00:08:06.701350 systemd[1541]: Created slice app.slice - User Application Slice. Jul 12 00:08:06.701385 systemd[1541]: Reached target paths.target - Paths. Jul 12 00:08:06.701398 systemd[1541]: Reached target timers.target - Timers. Jul 12 00:08:06.702750 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:08:06.713742 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:08:06.713878 systemd[1541]: Reached target sockets.target - Sockets. Jul 12 00:08:06.713896 systemd[1541]: Reached target basic.target - Basic System. Jul 12 00:08:06.713939 systemd[1541]: Reached target default.target - Main User Target. Jul 12 00:08:06.713969 systemd[1541]: Startup finished in 116ms. Jul 12 00:08:06.714159 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:08:06.715671 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:08:06.777641 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:42324.service - OpenSSH per-connection server daemon (10.0.0.1:42324). Jul 12 00:08:06.814916 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 42324 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:08:06.816485 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:06.821682 systemd-logind[1421]: New session 2 of user core. Jul 12 00:08:06.832467 systemd[1]: Started session-2.scope - Session 2 of User core. 
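The kubelet exit at the top of this stretch is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is only written by kubeadm init/join, so until that runs the unit exits with status 1 and systemd keeps it in a restart loop. A hedged preflight sketch of the same check; the path comes from the error message, while the guidance string is an addition of mine, not kubelet output:

package main

import (
	"fmt"
	"os"
)

func main() {
	const cfg = "/var/lib/kubelet/config.yaml" // path from the kubelet error above
	if _, err := os.Stat(cfg); os.IsNotExist(err) {
		// Same failure mode as run.go:72 in the log: the file does not
		// exist yet because kubeadm init/join has not been run.
		fmt.Printf("failed to load kubelet config file %s: %v\n", cfg, err)
		fmt.Println("hint: run kubeadm init (or join) to generate it")
		os.Exit(1)
	} else if err != nil {
		fmt.Printf("stat %s: %v\n", cfg, err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present; safe to start kubelet.service")
}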
Jul 12 00:08:06.885996 sshd[1552]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:06.894580 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:42324.service: Deactivated successfully. Jul 12 00:08:06.895890 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:08:06.897180 systemd-logind[1421]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:08:06.910871 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:42340.service - OpenSSH per-connection server daemon (10.0.0.1:42340). Jul 12 00:08:06.912126 systemd-logind[1421]: Removed session 2. Jul 12 00:08:06.941328 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 42340 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:08:06.942694 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:06.947480 systemd-logind[1421]: New session 3 of user core. Jul 12 00:08:06.954469 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:08:07.004256 sshd[1559]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:07.016624 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:42340.service: Deactivated successfully. Jul 12 00:08:07.018002 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:08:07.020344 systemd-logind[1421]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:08:07.021505 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:42356.service - OpenSSH per-connection server daemon (10.0.0.1:42356). Jul 12 00:08:07.022141 systemd-logind[1421]: Removed session 3. Jul 12 00:08:07.055669 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 42356 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:08:07.056969 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:07.060478 systemd-logind[1421]: New session 4 of user core. Jul 12 00:08:07.072456 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:08:07.124579 sshd[1566]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:07.141777 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:42356.service: Deactivated successfully. Jul 12 00:08:07.143257 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:08:07.145307 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:08:07.146749 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:42364.service - OpenSSH per-connection server daemon (10.0.0.1:42364). Jul 12 00:08:07.147447 systemd-logind[1421]: Removed session 4. Jul 12 00:08:07.186761 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 42364 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:08:07.188691 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:07.192602 systemd-logind[1421]: New session 5 of user core. Jul 12 00:08:07.201435 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:08:07.271979 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:08:07.272252 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:07.291045 sudo[1576]: pam_unix(sudo:session): session closed for user root Jul 12 00:08:07.293598 sshd[1573]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:07.301669 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:42364.service: Deactivated successfully. 
Jul 12 00:08:07.303999 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:08:07.306451 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:08:07.318110 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:42370.service - OpenSSH per-connection server daemon (10.0.0.1:42370). Jul 12 00:08:07.318973 systemd-logind[1421]: Removed session 5. Jul 12 00:08:07.349174 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 42370 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:08:07.350396 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:07.353888 systemd-logind[1421]: New session 6 of user core. Jul 12 00:08:07.364474 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 00:08:07.415637 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:08:07.415949 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:07.419130 sudo[1585]: pam_unix(sudo:session): session closed for user root Jul 12 00:08:07.423877 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:08:07.424137 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:07.444591 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:08:07.445808 auditctl[1588]: No rules Jul 12 00:08:07.446699 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:08:07.446915 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:08:07.450586 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:08:07.471381 augenrules[1606]: No rules Jul 12 00:08:07.473373 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:08:07.474573 sudo[1584]: pam_unix(sudo:session): session closed for user root Jul 12 00:08:07.476232 sshd[1581]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:07.486687 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:42370.service: Deactivated successfully. Jul 12 00:08:07.488097 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:08:07.489314 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:08:07.499589 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:42386.service - OpenSSH per-connection server daemon (10.0.0.1:42386). Jul 12 00:08:07.500761 systemd-logind[1421]: Removed session 6. Jul 12 00:08:07.531261 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 42386 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:08:07.532591 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:07.536588 systemd-logind[1421]: New session 7 of user core. Jul 12 00:08:07.548478 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 00:08:07.599432 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:08:07.599830 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:07.925565 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 12 00:08:07.925666 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:08:08.194788 dockerd[1635]: time="2025-07-12T00:08:08.194649816Z" level=info msg="Starting up" Jul 12 00:08:08.365100 dockerd[1635]: time="2025-07-12T00:08:08.364897035Z" level=info msg="Loading containers: start." Jul 12 00:08:08.562318 kernel: Initializing XFRM netlink socket Jul 12 00:08:08.629412 systemd-networkd[1367]: docker0: Link UP Jul 12 00:08:08.652670 dockerd[1635]: time="2025-07-12T00:08:08.652603928Z" level=info msg="Loading containers: done." Jul 12 00:08:08.665794 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck172384503-merged.mount: Deactivated successfully. Jul 12 00:08:08.667419 dockerd[1635]: time="2025-07-12T00:08:08.667371877Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:08:08.667515 dockerd[1635]: time="2025-07-12T00:08:08.667496342Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:08:08.667627 dockerd[1635]: time="2025-07-12T00:08:08.667603326Z" level=info msg="Daemon has completed initialization" Jul 12 00:08:08.697027 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:08:08.697371 dockerd[1635]: time="2025-07-12T00:08:08.696793459Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:08:09.288018 containerd[1440]: time="2025-07-12T00:08:09.287964274Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 12 00:08:09.905254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount713394241.mount: Deactivated successfully. 
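Once dockerd logs "API listen on /run/docker.sock", the Engine REST API is reachable over that unix socket. A stdlib-only Go sketch querying the real /version endpoint; the socket path is taken from the log, and the "http://unix" host is a dummy value that net/http requires when all dialing is redirected to a unix socket:

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	sock := "/run/docker.sock" // from the "API listen on" line above
	client := &http.Client{
		Transport: &http.Transport{
			// Route every request to the unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", sock)
			},
		},
	}
	resp, err := client.Get("http://unix/version")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON including Version "26.1.0" as logged above
}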
Jul 12 00:08:10.708882 containerd[1440]: time="2025-07-12T00:08:10.708814459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:10.709406 containerd[1440]: time="2025-07-12T00:08:10.709371911Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 12 00:08:10.709986 containerd[1440]: time="2025-07-12T00:08:10.709949993Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:10.713092 containerd[1440]: time="2025-07-12T00:08:10.713022735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:10.714438 containerd[1440]: time="2025-07-12T00:08:10.714407438Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.426397657s" Jul 12 00:08:10.714483 containerd[1440]: time="2025-07-12T00:08:10.714448859Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 12 00:08:10.715108 containerd[1440]: time="2025-07-12T00:08:10.715040895Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 12 00:08:11.639783 containerd[1440]: time="2025-07-12T00:08:11.639733398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:11.640529 containerd[1440]: time="2025-07-12T00:08:11.640298275Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 12 00:08:11.644443 containerd[1440]: time="2025-07-12T00:08:11.644396807Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:11.651020 containerd[1440]: time="2025-07-12T00:08:11.650961492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:11.651736 containerd[1440]: time="2025-07-12T00:08:11.651700059Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 936.626677ms" Jul 12 00:08:11.651800 containerd[1440]: time="2025-07-12T00:08:11.651736831Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 12 00:08:11.652803 
containerd[1440]: time="2025-07-12T00:08:11.652610429Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 12 00:08:12.702246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:08:12.711527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:12.718312 containerd[1440]: time="2025-07-12T00:08:12.717903877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:12.719091 containerd[1440]: time="2025-07-12T00:08:12.719013331Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 12 00:08:12.719941 containerd[1440]: time="2025-07-12T00:08:12.719765646Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:12.723372 containerd[1440]: time="2025-07-12T00:08:12.723257959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:12.724950 containerd[1440]: time="2025-07-12T00:08:12.724530158Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.071882923s" Jul 12 00:08:12.724950 containerd[1440]: time="2025-07-12T00:08:12.724568354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 12 00:08:12.725243 containerd[1440]: time="2025-07-12T00:08:12.725213591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 12 00:08:12.816409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:12.821325 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:08:12.907521 kubelet[1856]: E0712 00:08:12.907466 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:08:12.910731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:08:12.910874 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:08:13.725364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860142840.mount: Deactivated successfully. 
Jul 12 00:08:14.170718 containerd[1440]: time="2025-07-12T00:08:14.170550070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:14.174913 containerd[1440]: time="2025-07-12T00:08:14.174850780Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 12 00:08:14.175962 containerd[1440]: time="2025-07-12T00:08:14.175920490Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:14.182685 containerd[1440]: time="2025-07-12T00:08:14.182621077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:14.183374 containerd[1440]: time="2025-07-12T00:08:14.183332545Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.458079081s" Jul 12 00:08:14.183432 containerd[1440]: time="2025-07-12T00:08:14.183381820Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 12 00:08:14.184051 containerd[1440]: time="2025-07-12T00:08:14.184011753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:08:14.809247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671944792.mount: Deactivated successfully. 
Jul 12 00:08:15.483460 containerd[1440]: time="2025-07-12T00:08:15.483397910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.484712 containerd[1440]: time="2025-07-12T00:08:15.484681951Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 12 00:08:15.486150 containerd[1440]: time="2025-07-12T00:08:15.486104571Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.490306 containerd[1440]: time="2025-07-12T00:08:15.490262932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.491486 containerd[1440]: time="2025-07-12T00:08:15.491452674Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.307406978s" Jul 12 00:08:15.491530 containerd[1440]: time="2025-07-12T00:08:15.491488813Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:08:15.492034 containerd[1440]: time="2025-07-12T00:08:15.491941775Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:08:15.911946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142392131.mount: Deactivated successfully. 
Jul 12 00:08:15.916005 containerd[1440]: time="2025-07-12T00:08:15.915962729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.916651 containerd[1440]: time="2025-07-12T00:08:15.916455480Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 12 00:08:15.917350 containerd[1440]: time="2025-07-12T00:08:15.917290369Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.922501 containerd[1440]: time="2025-07-12T00:08:15.922462269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.923468 containerd[1440]: time="2025-07-12T00:08:15.923434093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 431.458786ms" Jul 12 00:08:15.923468 containerd[1440]: time="2025-07-12T00:08:15.923465178Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:08:15.924031 containerd[1440]: time="2025-07-12T00:08:15.923998922Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 12 00:08:16.422922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount527075128.mount: Deactivated successfully. Jul 12 00:08:17.736166 containerd[1440]: time="2025-07-12T00:08:17.736112583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:17.737601 containerd[1440]: time="2025-07-12T00:08:17.737562749Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 12 00:08:17.738362 containerd[1440]: time="2025-07-12T00:08:17.738305428Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:17.742676 containerd[1440]: time="2025-07-12T00:08:17.742642177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:17.744909 containerd[1440]: time="2025-07-12T00:08:17.744855306Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.820825586s" Jul 12 00:08:17.744954 containerd[1440]: time="2025-07-12T00:08:17.744909540Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 12 00:08:22.391524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
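Each "Pulled image ... in <duration>" line above reports wall-clock fetch time in Go duration syntax ("1.426397657s", "936.626677ms", and so on), which parses directly with time.ParseDuration. A small sketch totalling the seven control-plane pulls, with the duration strings copied verbatim from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Durations copied from the "Pulled image ... in <d>" lines above.
	pulls := map[string]string{
		"kube-apiserver:v1.32.6":          "1.426397657s",
		"kube-controller-manager:v1.32.6": "936.626677ms",
		"kube-scheduler:v1.32.6":          "1.071882923s",
		"kube-proxy:v1.32.6":              "1.458079081s",
		"coredns:v1.11.3":                 "1.307406978s",
		"pause:3.10":                      "431.458786ms",
		"etcd:3.5.16-0":                   "1.820825586s",
	}
	var total time.Duration
	for img, s := range pulls {
		d, err := time.ParseDuration(s)
		if err != nil {
			panic(err)
		}
		total += d
		fmt.Printf("%-34s %v\n", img, d)
	}
	fmt.Println("total pull time:", total) // about 8.45s across the seven images
}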
Jul 12 00:08:22.405531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:22.429743 systemd[1]: Reloading requested from client PID 2013 ('systemctl') (unit session-7.scope)... Jul 12 00:08:22.429764 systemd[1]: Reloading... Jul 12 00:08:22.499308 zram_generator::config[2058]: No configuration found. Jul 12 00:08:22.717762 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:08:22.772126 systemd[1]: Reloading finished in 341 ms. Jul 12 00:08:22.812797 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:22.815668 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:08:22.815867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:22.817416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:22.915729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:22.919538 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:08:22.953173 kubelet[2099]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:08:22.953173 kubelet[2099]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:08:22.953173 kubelet[2099]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
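The deprecation warnings above all point the same way: flags such as --container-runtime-endpoint belong in the KubeletConfiguration file passed via --config, which kubelet accepts as JSON as well as YAML. A stdlib sketch emitting a minimal plausible config; the field names follow kubelet.config.k8s.io/v1beta1, but treat this tiny field set as illustrative rather than a complete working configuration:

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal KubeletConfiguration sketch (kubelet --config takes JSON or YAML).
// Values mirror this log: systemd cgroup driver, containerd's CRI socket.
type kubeletConfig struct {
	APIVersion               string `json:"apiVersion"`
	Kind                     string `json:"kind"`
	CgroupDriver             string `json:"cgroupDriver"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		Kind:                     "KubeletConfiguration",
		CgroupDriver:             "systemd", // matches CgroupDriver:"systemd" in the dump below
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}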
Jul 12 00:08:22.953595 kubelet[2099]: I0712 00:08:22.953219 2099 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:08:24.028664 kubelet[2099]: I0712 00:08:24.028607 2099 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:08:24.028664 kubelet[2099]: I0712 00:08:24.028648 2099 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:08:24.029017 kubelet[2099]: I0712 00:08:24.028934 2099 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:08:24.052361 kubelet[2099]: E0712 00:08:24.052311 2099 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:08:24.055048 kubelet[2099]: I0712 00:08:24.054797 2099 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:08:24.066702 kubelet[2099]: E0712 00:08:24.066659 2099 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:08:24.066702 kubelet[2099]: I0712 00:08:24.066697 2099 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:08:24.072715 kubelet[2099]: I0712 00:08:24.072685 2099 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:08:24.073309 kubelet[2099]: I0712 00:08:24.073015 2099 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:08:24.073309 kubelet[2099]: I0712 00:08:24.073052 2099 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:08:24.073488 kubelet[2099]: I0712 00:08:24.073366 2099 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:08:24.073488 kubelet[2099]: I0712 00:08:24.073376 2099 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:08:24.073676 kubelet[2099]: I0712 00:08:24.073647 2099 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:08:24.079931 kubelet[2099]: I0712 00:08:24.079907 2099 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:08:24.079985 kubelet[2099]: I0712 00:08:24.079938 2099 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:08:24.079985 kubelet[2099]: I0712 00:08:24.079958 2099 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:08:24.079985 kubelet[2099]: I0712 00:08:24.079968 2099 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:08:24.083034 kubelet[2099]: I0712 00:08:24.082779 2099 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:08:24.083887 kubelet[2099]: I0712 00:08:24.083600 2099 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:08:24.083887 kubelet[2099]: W0712 00:08:24.083723 2099 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 12 00:08:24.083887 kubelet[2099]: W0712 00:08:24.083754 2099 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Jul 12 00:08:24.083887 kubelet[2099]: E0712 00:08:24.083814 2099 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:08:24.085510 kubelet[2099]: I0712 00:08:24.084617 2099 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:08:24.085510 kubelet[2099]: W0712 00:08:24.084616 2099 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Jul 12 00:08:24.085510 kubelet[2099]: I0712 00:08:24.084650 2099 server.go:1287] "Started kubelet" Jul 12 00:08:24.085510 kubelet[2099]: E0712 00:08:24.084663 2099 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:08:24.085510 kubelet[2099]: I0712 00:08:24.084704 2099 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:08:24.085933 kubelet[2099]: I0712 00:08:24.085907 2099 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:08:24.086487 kubelet[2099]: I0712 00:08:24.086460 2099 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:08:24.090225 kubelet[2099]: I0712 00:08:24.090200 2099 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:08:24.090574 kubelet[2099]: I0712 00:08:24.090561 2099 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:08:24.092007 kubelet[2099]: I0712 00:08:24.091983 2099 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:08:24.092157 kubelet[2099]: I0712 00:08:24.092145 2099 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:08:24.093021 kubelet[2099]: E0712 00:08:24.093005 2099 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:08:24.093744 kubelet[2099]: W0712 00:08:24.093695 2099 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Jul 12 00:08:24.093807 kubelet[2099]: E0712 00:08:24.093750 2099 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:08:24.093854 kubelet[2099]: E0712 00:08:24.093821 2099 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms" Jul 12 00:08:24.094140 kubelet[2099]: I0712 00:08:24.094110 2099 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:08:24.094303 kubelet[2099]: I0712 00:08:24.094262 2099 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:08:24.096754 kubelet[2099]: I0712 00:08:24.096706 2099 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:08:24.096754 kubelet[2099]: E0712 00:08:24.095854 2099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18515861196014f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:08:24.084632825 +0000 UTC m=+1.161952676,LastTimestamp:2025-07-12 00:08:24.084632825 +0000 UTC m=+1.161952676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:08:24.098821 kubelet[2099]: I0712 00:08:24.098757 2099 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:08:24.099021 kubelet[2099]: I0712 00:08:24.099004 2099 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:08:24.107877 kubelet[2099]: I0712 00:08:24.107731 2099 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:08:24.109246 kubelet[2099]: I0712 00:08:24.109025 2099 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:08:24.109246 kubelet[2099]: I0712 00:08:24.109048 2099 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:08:24.109246 kubelet[2099]: I0712 00:08:24.109064 2099 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
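The lease controller's "will retry" interval above starts at 200ms and, as the later entries below show, doubles to 400ms and then 800ms while the API server stays unreachable. A sketch of that doubling backoff; the 7s ceiling is an assumption about where the doubling stops, since this log only shows the 200/400/800ms steps:

package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond // first "will retry" interval in the log
	const ceiling = 7 * time.Second    // assumed cap; not visible in this log
	for i := 0; i < 6; i++ {
		fmt.Printf("retry %d: interval=%v\n", i+1, interval)
		interval *= 2
		if interval > ceiling {
			interval = ceiling
		}
	}
}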
Jul 12 00:08:24.109246 kubelet[2099]: I0712 00:08:24.109071 2099 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:08:24.109246 kubelet[2099]: E0712 00:08:24.109106 2099 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:08:24.110136 kubelet[2099]: W0712 00:08:24.110087 2099 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Jul 12 00:08:24.110183 kubelet[2099]: E0712 00:08:24.110135 2099 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:08:24.110728 kubelet[2099]: I0712 00:08:24.110705 2099 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:08:24.110805 kubelet[2099]: I0712 00:08:24.110794 2099 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:08:24.110859 kubelet[2099]: I0712 00:08:24.110851 2099 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:08:24.182951 kubelet[2099]: I0712 00:08:24.182920 2099 policy_none.go:49] "None policy: Start" Jul 12 00:08:24.183099 kubelet[2099]: I0712 00:08:24.183089 2099 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:08:24.183158 kubelet[2099]: I0712 00:08:24.183149 2099 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:08:24.187956 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:08:24.193730 kubelet[2099]: E0712 00:08:24.193702 2099 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:08:24.200856 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 00:08:24.209661 kubelet[2099]: E0712 00:08:24.209625 2099 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:08:24.215718 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:08:24.216985 kubelet[2099]: I0712 00:08:24.216950 2099 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:08:24.217377 kubelet[2099]: I0712 00:08:24.217167 2099 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:08:24.217377 kubelet[2099]: I0712 00:08:24.217190 2099 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:08:24.217459 kubelet[2099]: I0712 00:08:24.217434 2099 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:08:24.218157 kubelet[2099]: E0712 00:08:24.218070 2099 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:08:24.218157 kubelet[2099]: E0712 00:08:24.218117 2099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:08:24.295439 kubelet[2099]: E0712 00:08:24.295323 2099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" Jul 12 00:08:24.318541 kubelet[2099]: I0712 00:08:24.318497 2099 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:08:24.318944 kubelet[2099]: E0712 00:08:24.318922 2099 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Jul 12 00:08:24.417562 systemd[1]: Created slice kubepods-burstable-pod1ec635411b676186908546c53442b4bb.slice - libcontainer container kubepods-burstable-pod1ec635411b676186908546c53442b4bb.slice. Jul 12 00:08:24.426168 kubelet[2099]: E0712 00:08:24.426072 2099 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:08:24.429266 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 12 00:08:24.445735 kubelet[2099]: E0712 00:08:24.445662 2099 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:08:24.448172 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 12 00:08:24.449969 kubelet[2099]: E0712 00:08:24.449948 2099 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:08:24.494260 kubelet[2099]: I0712 00:08:24.494225 2099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ec635411b676186908546c53442b4bb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ec635411b676186908546c53442b4bb\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:24.494260 kubelet[2099]: I0712 00:08:24.494266 2099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:24.494422 kubelet[2099]: I0712 00:08:24.494301 2099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:24.494422 kubelet[2099]: I0712 00:08:24.494319 2099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ec635411b676186908546c53442b4bb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ec635411b676186908546c53442b4bb\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:24.494422 kubelet[2099]: I0712 00:08:24.494336 2099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ec635411b676186908546c53442b4bb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ec635411b676186908546c53442b4bb\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:24.494422 kubelet[2099]: I0712 00:08:24.494351 2099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:24.494422 kubelet[2099]: I0712 00:08:24.494367 2099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:24.494522 kubelet[2099]: I0712 00:08:24.494382 2099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:24.494522 kubelet[2099]: I0712 00:08:24.494396 2099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 12 00:08:24.520392 kubelet[2099]: I0712 00:08:24.520340 2099 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 12 00:08:24.520686 kubelet[2099]: E0712 00:08:24.520659 2099 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Jul 12 00:08:24.696709 kubelet[2099]: E0712 00:08:24.696591 2099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms"
Jul 12 00:08:24.726889 kubelet[2099]: E0712 00:08:24.726824 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:24.727536 containerd[1440]: time="2025-07-12T00:08:24.727499115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ec635411b676186908546c53442b4bb,Namespace:kube-system,Attempt:0,}"
Jul 12 00:08:24.746812 kubelet[2099]: E0712 00:08:24.746777 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:24.747267 containerd[1440]: time="2025-07-12T00:08:24.747233614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}"
Jul 12 00:08:24.750712 kubelet[2099]: E0712 00:08:24.750495 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:24.750957 containerd[1440]: time="2025-07-12T00:08:24.750923301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}"
Jul 12 00:08:24.922028 kubelet[2099]: I0712 00:08:24.921983 2099 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 12 00:08:24.922395 kubelet[2099]: E0712 00:08:24.922364 2099 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Jul 12 00:08:25.062435 kubelet[2099]: W0712 00:08:25.062390 2099 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Jul 12 00:08:25.062435 kubelet[2099]: E0712 00:08:25.062437 2099 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:08:25.192529 kubelet[2099]: W0712 00:08:25.192469 2099 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Jul 12 00:08:25.192529 kubelet[2099]: E0712 00:08:25.192533 2099 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:08:25.199052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072238567.mount: Deactivated successfully.
Jul 12 00:08:25.204402 containerd[1440]: time="2025-07-12T00:08:25.204013012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:08:25.205199 containerd[1440]: time="2025-07-12T00:08:25.205155598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jul 12 00:08:25.205752 containerd[1440]: time="2025-07-12T00:08:25.205700391Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:08:25.206547 containerd[1440]: time="2025-07-12T00:08:25.206521425Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:08:25.207407 containerd[1440]: time="2025-07-12T00:08:25.207352625Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:08:25.208065 containerd[1440]: time="2025-07-12T00:08:25.208014304Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 12 00:08:25.208848 containerd[1440]: time="2025-07-12T00:08:25.208814842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 12 00:08:25.210977 containerd[1440]: time="2025-07-12T00:08:25.210939137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:08:25.211954 containerd[1440]: time="2025-07-12T00:08:25.211920407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 484.339585ms"
Jul 12 00:08:25.213394 containerd[1440]: time="2025-07-12T00:08:25.213159542Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.810434ms"
Jul 12 00:08:25.216501 containerd[1440]: time="2025-07-12T00:08:25.216459407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.463487ms"
Jul 12 00:08:25.346409 containerd[1440]: time="2025-07-12T00:08:25.345500107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:25.346409 containerd[1440]: time="2025-07-12T00:08:25.346245966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:25.346409 containerd[1440]: time="2025-07-12T00:08:25.346260017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:25.347078 containerd[1440]: time="2025-07-12T00:08:25.346003631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:25.347078 containerd[1440]: time="2025-07-12T00:08:25.346977135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:25.347078 containerd[1440]: time="2025-07-12T00:08:25.346990745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:25.347078 containerd[1440]: time="2025-07-12T00:08:25.346438746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:25.347078 containerd[1440]: time="2025-07-12T00:08:25.346490943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:25.347078 containerd[1440]: time="2025-07-12T00:08:25.346505954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:25.347078 containerd[1440]: time="2025-07-12T00:08:25.346606507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:25.347078 containerd[1440]: time="2025-07-12T00:08:25.346384466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:25.347382 containerd[1440]: time="2025-07-12T00:08:25.347071563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:25.377473 systemd[1]: Started cri-containerd-6a76a2ca705cee874e1e89e046ee1d5aad8a5e139ed7f57c2c18043b6497cf0b.scope - libcontainer container 6a76a2ca705cee874e1e89e046ee1d5aad8a5e139ed7f57c2c18043b6497cf0b.
Jul 12 00:08:25.377612 kubelet[2099]: W0712 00:08:25.377488 2099 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Jul 12 00:08:25.377612 kubelet[2099]: E0712 00:08:25.377525 2099 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:08:25.378924 systemd[1]: Started cri-containerd-94167e96acb7c28c2183673e3a1a2d9e25af00ee00295af4bb00f180a6e1b857.scope - libcontainer container 94167e96acb7c28c2183673e3a1a2d9e25af00ee00295af4bb00f180a6e1b857.
Jul 12 00:08:25.380061 systemd[1]: Started cri-containerd-f81782e41c83dd5fe461db97dd6d0429d7a3aa8ad3d9094051d5f31bef628ee2.scope - libcontainer container f81782e41c83dd5fe461db97dd6d0429d7a3aa8ad3d9094051d5f31bef628ee2.
Jul 12 00:08:25.410815 containerd[1440]: time="2025-07-12T00:08:25.410768198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ec635411b676186908546c53442b4bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a76a2ca705cee874e1e89e046ee1d5aad8a5e139ed7f57c2c18043b6497cf0b\""
Jul 12 00:08:25.414874 kubelet[2099]: E0712 00:08:25.414839 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:25.417950 containerd[1440]: time="2025-07-12T00:08:25.417810888Z" level=info msg="CreateContainer within sandbox \"6a76a2ca705cee874e1e89e046ee1d5aad8a5e139ed7f57c2c18043b6497cf0b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 12 00:08:25.417950 containerd[1440]: time="2025-07-12T00:08:25.417829701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"94167e96acb7c28c2183673e3a1a2d9e25af00ee00295af4bb00f180a6e1b857\""
Jul 12 00:08:25.418549 kubelet[2099]: E0712 00:08:25.418512 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:25.420223 containerd[1440]: time="2025-07-12T00:08:25.420190007Z" level=info msg="CreateContainer within sandbox \"94167e96acb7c28c2183673e3a1a2d9e25af00ee00295af4bb00f180a6e1b857\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 12 00:08:25.426096 containerd[1440]: time="2025-07-12T00:08:25.426058088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f81782e41c83dd5fe461db97dd6d0429d7a3aa8ad3d9094051d5f31bef628ee2\""
Jul 12 00:08:25.426731 kubelet[2099]: E0712 00:08:25.426711 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:25.428097 containerd[1440]: time="2025-07-12T00:08:25.428068461Z" level=info msg="CreateContainer within sandbox \"f81782e41c83dd5fe461db97dd6d0429d7a3aa8ad3d9094051d5f31bef628ee2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 12 00:08:25.431311 containerd[1440]: time="2025-07-12T00:08:25.431261089Z" level=info msg="CreateContainer within sandbox \"6a76a2ca705cee874e1e89e046ee1d5aad8a5e139ed7f57c2c18043b6497cf0b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e3e2d1984a94b5bb71ec23c803d49602b6f16202dacafbbfd64665c35d9dfc02\""
Jul 12 00:08:25.431901 containerd[1440]: time="2025-07-12T00:08:25.431833902Z" level=info msg="StartContainer for \"e3e2d1984a94b5bb71ec23c803d49602b6f16202dacafbbfd64665c35d9dfc02\""
Jul 12 00:08:25.435732 containerd[1440]: time="2025-07-12T00:08:25.435694893Z" level=info msg="CreateContainer within sandbox \"94167e96acb7c28c2183673e3a1a2d9e25af00ee00295af4bb00f180a6e1b857\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b959474ee5838a63fbb6b247ad0cd6d0274ff5d890a1a62cce16d0a6bbd5d8ed\""
Jul 12 00:08:25.436226 containerd[1440]: time="2025-07-12T00:08:25.436201379Z" level=info msg="StartContainer for \"b959474ee5838a63fbb6b247ad0cd6d0274ff5d890a1a62cce16d0a6bbd5d8ed\""
Jul 12 00:08:25.444524 containerd[1440]: time="2025-07-12T00:08:25.444459467Z" level=info msg="CreateContainer within sandbox \"f81782e41c83dd5fe461db97dd6d0429d7a3aa8ad3d9094051d5f31bef628ee2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"844e24ff0b277e3d51d3650c7873a0366b1514f92bed111c9b536bb76abdc353\""
Jul 12 00:08:25.444955 containerd[1440]: time="2025-07-12T00:08:25.444928967Z" level=info msg="StartContainer for \"844e24ff0b277e3d51d3650c7873a0366b1514f92bed111c9b536bb76abdc353\""
Jul 12 00:08:25.459449 systemd[1]: Started cri-containerd-e3e2d1984a94b5bb71ec23c803d49602b6f16202dacafbbfd64665c35d9dfc02.scope - libcontainer container e3e2d1984a94b5bb71ec23c803d49602b6f16202dacafbbfd64665c35d9dfc02.
Jul 12 00:08:25.463172 systemd[1]: Started cri-containerd-b959474ee5838a63fbb6b247ad0cd6d0274ff5d890a1a62cce16d0a6bbd5d8ed.scope - libcontainer container b959474ee5838a63fbb6b247ad0cd6d0274ff5d890a1a62cce16d0a6bbd5d8ed.
Jul 12 00:08:25.474461 systemd[1]: Started cri-containerd-844e24ff0b277e3d51d3650c7873a0366b1514f92bed111c9b536bb76abdc353.scope - libcontainer container 844e24ff0b277e3d51d3650c7873a0366b1514f92bed111c9b536bb76abdc353.
Jul 12 00:08:25.490466 kubelet[2099]: W0712 00:08:25.490406 2099 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Jul 12 00:08:25.490623 kubelet[2099]: E0712 00:08:25.490476 2099 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:08:25.497318 kubelet[2099]: E0712 00:08:25.497112 2099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s"
Jul 12 00:08:25.501613 containerd[1440]: time="2025-07-12T00:08:25.501570422Z" level=info msg="StartContainer for \"e3e2d1984a94b5bb71ec23c803d49602b6f16202dacafbbfd64665c35d9dfc02\" returns successfully"
Jul 12 00:08:25.518608 containerd[1440]: time="2025-07-12T00:08:25.518558540Z" level=info msg="StartContainer for \"b959474ee5838a63fbb6b247ad0cd6d0274ff5d890a1a62cce16d0a6bbd5d8ed\" returns successfully"
Jul 12 00:08:25.560934 containerd[1440]: time="2025-07-12T00:08:25.559328125Z" level=info msg="StartContainer for \"844e24ff0b277e3d51d3650c7873a0366b1514f92bed111c9b536bb76abdc353\" returns successfully"
Jul 12 00:08:25.724683 kubelet[2099]: I0712 00:08:25.724350 2099 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 12 00:08:26.116391 kubelet[2099]: E0712 00:08:26.116293 2099 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:08:26.117353 kubelet[2099]: E0712 00:08:26.117324 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:26.120813 kubelet[2099]: E0712 00:08:26.120645 2099 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:08:26.120813 kubelet[2099]: E0712 00:08:26.120760 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:26.124399 kubelet[2099]: E0712 00:08:26.124379 2099 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:08:26.124511 kubelet[2099]: E0712 00:08:26.124494 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:27.086419 kubelet[2099]: I0712 00:08:27.086378 2099 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 12 00:08:27.086419 kubelet[2099]: E0712 00:08:27.086418 2099 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 12 00:08:27.097012 kubelet[2099]: E0712 00:08:27.096978 2099 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:08:27.128309 kubelet[2099]: E0712 00:08:27.126690 2099 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:08:27.128309 kubelet[2099]: E0712 00:08:27.126716 2099 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:08:27.128309 kubelet[2099]: E0712 00:08:27.126818 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:27.128309 kubelet[2099]: E0712 00:08:27.126819 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:27.197644 kubelet[2099]: E0712 00:08:27.197597 2099 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:08:27.298449 kubelet[2099]: E0712 00:08:27.298403 2099 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:08:27.399261 kubelet[2099]: E0712 00:08:27.398873 2099 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:08:27.499530 kubelet[2099]: E0712 00:08:27.499456 2099 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:08:27.599886 kubelet[2099]: E0712 00:08:27.599833 2099 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:08:27.692233 kubelet[2099]: I0712 00:08:27.691938 2099 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:27.700948 kubelet[2099]: E0712 00:08:27.700923 2099 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:27.700948 kubelet[2099]: I0712 00:08:27.700947 2099 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 12 00:08:27.702536 kubelet[2099]: E0712 00:08:27.702469 2099 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 12 00:08:27.702638 kubelet[2099]: I0712 00:08:27.702618 2099 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:27.704729 kubelet[2099]: E0712 00:08:27.704697 2099 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:28.084643 kubelet[2099]: I0712 00:08:28.084319 2099 apiserver.go:52] "Watching apiserver"
Jul 12 00:08:28.092915 kubelet[2099]: I0712 00:08:28.092874 2099 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 12 00:08:28.127941 kubelet[2099]: I0712 00:08:28.127874 2099 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:28.134599 kubelet[2099]: E0712 00:08:28.134561 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:29.129744 kubelet[2099]: E0712 00:08:29.129712 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:29.372350 systemd[1]: Reloading requested from client PID 2377 ('systemctl') (unit session-7.scope)...
Jul 12 00:08:29.372366 systemd[1]: Reloading...
Jul 12 00:08:29.431310 zram_generator::config[2419]: No configuration found.
Jul 12 00:08:29.514878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:08:29.580158 systemd[1]: Reloading finished in 207 ms.
Jul 12 00:08:29.610656 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:08:29.624190 systemd[1]: kubelet.service: Deactivated successfully.
Jul 12 00:08:29.624456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:08:29.624513 systemd[1]: kubelet.service: Consumed 1.532s CPU time, 130.8M memory peak, 0B memory swap peak.
Jul 12 00:08:29.633632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:08:29.735198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:08:29.740521 (kubelet)[2458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 12 00:08:29.782407 kubelet[2458]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:08:29.782407 kubelet[2458]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 12 00:08:29.782407 kubelet[2458]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:08:29.782407 kubelet[2458]: I0712 00:08:29.781857 2458 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 12 00:08:29.791084 kubelet[2458]: I0712 00:08:29.791038 2458 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 12 00:08:29.791084 kubelet[2458]: I0712 00:08:29.791074 2458 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 12 00:08:29.791554 kubelet[2458]: I0712 00:08:29.791531 2458 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 12 00:08:29.793039 kubelet[2458]: I0712 00:08:29.793011 2458 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 12 00:08:29.795625 kubelet[2458]: I0712 00:08:29.795592 2458 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 00:08:29.798683 kubelet[2458]: E0712 00:08:29.798654 2458 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 12 00:08:29.798683 kubelet[2458]: I0712 00:08:29.798682 2458 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 12 00:08:29.802935 kubelet[2458]: I0712 00:08:29.801437 2458 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 12 00:08:29.802935 kubelet[2458]: I0712 00:08:29.801631 2458 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 12 00:08:29.802935 kubelet[2458]: I0712 00:08:29.801652 2458 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 12 00:08:29.802935 kubelet[2458]: I0712 00:08:29.801905 2458 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 00:08:29.803123 kubelet[2458]: I0712 00:08:29.801915 2458 container_manager_linux.go:304] "Creating device plugin manager"
Jul 12 00:08:29.803123 kubelet[2458]: I0712 00:08:29.801958 2458 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:08:29.803123 kubelet[2458]: I0712 00:08:29.802087 2458 kubelet.go:446] "Attempting to sync node with API server"
Jul 12 00:08:29.803123 kubelet[2458]: I0712 00:08:29.802106 2458 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 00:08:29.803123 kubelet[2458]: I0712 00:08:29.802126 2458 kubelet.go:352] "Adding apiserver pod source"
Jul 12 00:08:29.803123 kubelet[2458]: I0712 00:08:29.802137 2458 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 00:08:29.803123 kubelet[2458]: I0712 00:08:29.802684 2458 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 12 00:08:29.803691 kubelet[2458]: I0712 00:08:29.803669 2458 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 00:08:29.804244 kubelet[2458]: I0712 00:08:29.804222 2458 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 12 00:08:29.804377 kubelet[2458]: I0712 00:08:29.804366 2458 server.go:1287] "Started kubelet"
Jul 12 00:08:29.807316 kubelet[2458]: I0712 00:08:29.805574 2458 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 00:08:29.807316 kubelet[2458]: I0712 00:08:29.805829 2458 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 00:08:29.807316 kubelet[2458]: I0712 00:08:29.805879 2458 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 00:08:29.807316 kubelet[2458]: I0712 00:08:29.807048 2458 server.go:479] "Adding debug handlers to kubelet server"
Jul 12 00:08:29.811292 kubelet[2458]: I0712 00:08:29.808131 2458 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 00:08:29.818313 kubelet[2458]: I0712 00:08:29.809002 2458 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 00:08:29.818446 kubelet[2458]: I0712 00:08:29.818427 2458 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 12 00:08:29.818815 kubelet[2458]: I0712 00:08:29.818792 2458 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 12 00:08:29.818959 kubelet[2458]: I0712 00:08:29.818943 2458 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 00:08:29.822366 kubelet[2458]: I0712 00:08:29.821490 2458 factory.go:221] Registration of the systemd container factory successfully
Jul 12 00:08:29.822366 kubelet[2458]: I0712 00:08:29.821610 2458 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 12 00:08:29.822366 kubelet[2458]: E0712 00:08:29.822146 2458 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 12 00:08:29.823029 kubelet[2458]: I0712 00:08:29.822978 2458 factory.go:221] Registration of the containerd container factory successfully
Jul 12 00:08:29.826809 kubelet[2458]: I0712 00:08:29.826758 2458 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 12 00:08:29.828573 kubelet[2458]: I0712 00:08:29.828540 2458 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 12 00:08:29.828573 kubelet[2458]: I0712 00:08:29.828571 2458 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 12 00:08:29.828647 kubelet[2458]: I0712 00:08:29.828592 2458 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 12 00:08:29.828647 kubelet[2458]: I0712 00:08:29.828598 2458 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 12 00:08:29.828647 kubelet[2458]: E0712 00:08:29.828643 2458 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 12 00:08:29.856365 kubelet[2458]: I0712 00:08:29.856329 2458 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 12 00:08:29.856365 kubelet[2458]: I0712 00:08:29.856351 2458 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 12 00:08:29.856365 kubelet[2458]: I0712 00:08:29.856372 2458 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:08:29.856552 kubelet[2458]: I0712 00:08:29.856533 2458 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 12 00:08:29.856592 kubelet[2458]: I0712 00:08:29.856551 2458 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 12 00:08:29.856592 kubelet[2458]: I0712 00:08:29.856570 2458 policy_none.go:49] "None policy: Start"
Jul 12 00:08:29.856592 kubelet[2458]: I0712 00:08:29.856578 2458 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 12 00:08:29.856592 kubelet[2458]: I0712 00:08:29.856588 2458 state_mem.go:35] "Initializing new in-memory state store"
Jul 12 00:08:29.856696 kubelet[2458]: I0712 00:08:29.856677 2458 state_mem.go:75] "Updated machine memory state"
Jul 12 00:08:29.860362 kubelet[2458]: I0712 00:08:29.860335 2458 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 12 00:08:29.860524 kubelet[2458]: I0712 00:08:29.860496 2458 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 12 00:08:29.860556 kubelet[2458]: I0712 00:08:29.860516 2458 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 12 00:08:29.860726 kubelet[2458]: I0712 00:08:29.860702 2458 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 12 00:08:29.862840 kubelet[2458]: E0712 00:08:29.862415 2458 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 12 00:08:29.929811 kubelet[2458]: I0712 00:08:29.929734 2458 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 12 00:08:29.929811 kubelet[2458]: I0712 00:08:29.929786 2458 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:29.929811 kubelet[2458]: I0712 00:08:29.929797 2458 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:29.937730 kubelet[2458]: E0712 00:08:29.937640 2458 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:29.967003 kubelet[2458]: I0712 00:08:29.966968 2458 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 12 00:08:29.976460 kubelet[2458]: I0712 00:08:29.976421 2458 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 12 00:08:29.976610 kubelet[2458]: I0712 00:08:29.976513 2458 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 12 00:08:30.020029 kubelet[2458]: I0712 00:08:30.019979 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 12 00:08:30.020029 kubelet[2458]: I0712 00:08:30.020024 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ec635411b676186908546c53442b4bb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ec635411b676186908546c53442b4bb\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:30.020188 kubelet[2458]: I0712 00:08:30.020046 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:30.020188 kubelet[2458]: I0712 00:08:30.020064 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:30.020188 kubelet[2458]: I0712 00:08:30.020082 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:30.020188 kubelet[2458]: I0712 00:08:30.020098 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ec635411b676186908546c53442b4bb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ec635411b676186908546c53442b4bb\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:30.020188 kubelet[2458]: I0712 00:08:30.020113 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ec635411b676186908546c53442b4bb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ec635411b676186908546c53442b4bb\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:30.020328 kubelet[2458]: I0712 00:08:30.020160 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:30.020328 kubelet[2458]: I0712 00:08:30.020228 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:08:30.238426 kubelet[2458]: E0712 00:08:30.238324 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:30.238426 kubelet[2458]: E0712 00:08:30.238344 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:30.238426 kubelet[2458]: E0712 00:08:30.238406 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:30.802670 kubelet[2458]: I0712 00:08:30.802637 2458 apiserver.go:52] "Watching apiserver"
Jul 12 00:08:30.819372 kubelet[2458]: I0712 00:08:30.819324 2458 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 12 00:08:30.839134 kubelet[2458]: E0712 00:08:30.839096 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:30.839578 kubelet[2458]: I0712 00:08:30.839457 2458 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:30.840987 kubelet[2458]: I0712 00:08:30.840897 2458 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 12 00:08:30.908403 kubelet[2458]: E0712 00:08:30.906690 2458 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:08:30.908403 kubelet[2458]: E0712 00:08:30.906760 2458 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 12 00:08:30.908403 kubelet[2458]: E0712 00:08:30.906857 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:30.908403 kubelet[2458]: E0712 00:08:30.906876 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:30.919467 kubelet[2458]: I0712 00:08:30.919377 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.919356806 podStartE2EDuration="2.919356806s" podCreationTimestamp="2025-07-12 00:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:30.906550257 +0000 UTC m=+1.162786017" watchObservedRunningTime="2025-07-12 00:08:30.919356806 +0000 UTC m=+1.175592566"
Jul 12 00:08:30.919801 kubelet[2458]: I0712 00:08:30.919633 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.919626346 podStartE2EDuration="1.919626346s" podCreationTimestamp="2025-07-12 00:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:30.918005865 +0000 UTC m=+1.174241625" watchObservedRunningTime="2025-07-12 00:08:30.919626346 +0000 UTC m=+1.175862146"
Jul 12 00:08:30.931630 kubelet[2458]: I0712 00:08:30.931457 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.931440727 podStartE2EDuration="1.931440727s" podCreationTimestamp="2025-07-12 00:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:30.92985842 +0000 UTC m=+1.186094140" watchObservedRunningTime="2025-07-12 00:08:30.931440727 +0000 UTC m=+1.187676487"
Jul 12 00:08:31.840663 kubelet[2458]: E0712 00:08:31.840621 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:31.840993 kubelet[2458]: E0712 00:08:31.840634 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:32.842227 kubelet[2458]: E0712 00:08:32.841891 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:33.986411 kubelet[2458]: E0712 00:08:33.986370 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:35.519494 kubelet[2458]: I0712 00:08:35.519449 2458 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 12 00:08:35.519858 containerd[1440]: time="2025-07-12T00:08:35.519815812Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 12 00:08:35.520209 kubelet[2458]: I0712 00:08:35.520173 2458 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 12 00:08:36.266056 systemd[1]: Created slice kubepods-besteffort-pod076fb8b5_355f_4577_8cb5_82b159d5f79e.slice - libcontainer container kubepods-besteffort-pod076fb8b5_355f_4577_8cb5_82b159d5f79e.slice.
Jul 12 00:08:36.362307 kubelet[2458]: I0712 00:08:36.362174 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/076fb8b5-355f-4577-8cb5-82b159d5f79e-xtables-lock\") pod \"kube-proxy-74298\" (UID: \"076fb8b5-355f-4577-8cb5-82b159d5f79e\") " pod="kube-system/kube-proxy-74298"
Jul 12 00:08:36.362307 kubelet[2458]: I0712 00:08:36.362211 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbd7b\" (UniqueName: \"kubernetes.io/projected/076fb8b5-355f-4577-8cb5-82b159d5f79e-kube-api-access-qbd7b\") pod \"kube-proxy-74298\" (UID: \"076fb8b5-355f-4577-8cb5-82b159d5f79e\") " pod="kube-system/kube-proxy-74298"
Jul 12 00:08:36.362307 kubelet[2458]: I0712 00:08:36.362238 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/076fb8b5-355f-4577-8cb5-82b159d5f79e-kube-proxy\") pod \"kube-proxy-74298\" (UID: \"076fb8b5-355f-4577-8cb5-82b159d5f79e\") " pod="kube-system/kube-proxy-74298"
Jul 12 00:08:36.362307 kubelet[2458]: I0712 00:08:36.362254 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/076fb8b5-355f-4577-8cb5-82b159d5f79e-lib-modules\") pod \"kube-proxy-74298\" (UID: \"076fb8b5-355f-4577-8cb5-82b159d5f79e\") " pod="kube-system/kube-proxy-74298"
Jul 12 00:08:36.582381 kubelet[2458]: E0712 00:08:36.582171 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:36.583336 containerd[1440]: time="2025-07-12T00:08:36.583268973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-74298,Uid:076fb8b5-355f-4577-8cb5-82b159d5f79e,Namespace:kube-system,Attempt:0,}"
Jul 12 00:08:36.607896 containerd[1440]: time="2025-07-12T00:08:36.607770317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:36.608271 containerd[1440]: time="2025-07-12T00:08:36.607931232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:36.608271 containerd[1440]: time="2025-07-12T00:08:36.607963720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:36.609003 containerd[1440]: time="2025-07-12T00:08:36.608134037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:36.636976 systemd[1]: Started cri-containerd-b4dfb48595081477ae9fccc2b8c98607d54f3bb0b0ec8ec5d75b9c60b2e8f172.scope - libcontainer container b4dfb48595081477ae9fccc2b8c98607d54f3bb0b0ec8ec5d75b9c60b2e8f172.
Jul 12 00:08:36.646714 systemd[1]: Created slice kubepods-besteffort-podf2408fe6_a86e_42cc_9a37_90228518a127.slice - libcontainer container kubepods-besteffort-podf2408fe6_a86e_42cc_9a37_90228518a127.slice.
Jul 12 00:08:36.664814 kubelet[2458]: I0712 00:08:36.664603 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f2408fe6-a86e-42cc-9a37-90228518a127-var-lib-calico\") pod \"tigera-operator-747864d56d-qjmw9\" (UID: \"f2408fe6-a86e-42cc-9a37-90228518a127\") " pod="tigera-operator/tigera-operator-747864d56d-qjmw9"
Jul 12 00:08:36.664814 kubelet[2458]: I0712 00:08:36.664643 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwllb\" (UniqueName: \"kubernetes.io/projected/f2408fe6-a86e-42cc-9a37-90228518a127-kube-api-access-rwllb\") pod \"tigera-operator-747864d56d-qjmw9\" (UID: \"f2408fe6-a86e-42cc-9a37-90228518a127\") " pod="tigera-operator/tigera-operator-747864d56d-qjmw9"
Jul 12 00:08:36.665042 containerd[1440]: time="2025-07-12T00:08:36.665007628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-74298,Uid:076fb8b5-355f-4577-8cb5-82b159d5f79e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4dfb48595081477ae9fccc2b8c98607d54f3bb0b0ec8ec5d75b9c60b2e8f172\""
Jul 12 00:08:36.665694 kubelet[2458]: E0712 00:08:36.665679 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:36.668513 containerd[1440]: time="2025-07-12T00:08:36.668450070Z" level=info msg="CreateContainer within sandbox \"b4dfb48595081477ae9fccc2b8c98607d54f3bb0b0ec8ec5d75b9c60b2e8f172\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 12 00:08:36.681263 containerd[1440]: time="2025-07-12T00:08:36.681217857Z" level=info msg="CreateContainer within sandbox \"b4dfb48595081477ae9fccc2b8c98607d54f3bb0b0ec8ec5d75b9c60b2e8f172\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c1aa4ddbf11e29d70c6e6b5d4087fabe299105d1e97fef51ed44b0c442c9cc7\""
Jul 12 00:08:36.681810 containerd[1440]: time="2025-07-12T00:08:36.681781181Z" level=info msg="StartContainer for \"0c1aa4ddbf11e29d70c6e6b5d4087fabe299105d1e97fef51ed44b0c442c9cc7\""
Jul 12 00:08:36.707563 systemd[1]: Started cri-containerd-0c1aa4ddbf11e29d70c6e6b5d4087fabe299105d1e97fef51ed44b0c442c9cc7.scope - libcontainer container 0c1aa4ddbf11e29d70c6e6b5d4087fabe299105d1e97fef51ed44b0c442c9cc7.
Jul 12 00:08:36.737568 containerd[1440]: time="2025-07-12T00:08:36.737511559Z" level=info msg="StartContainer for \"0c1aa4ddbf11e29d70c6e6b5d4087fabe299105d1e97fef51ed44b0c442c9cc7\" returns successfully"
Jul 12 00:08:36.849605 kubelet[2458]: E0712 00:08:36.848969 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:36.951290 containerd[1440]: time="2025-07-12T00:08:36.951192544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-qjmw9,Uid:f2408fe6-a86e-42cc-9a37-90228518a127,Namespace:tigera-operator,Attempt:0,}"
Jul 12 00:08:36.975734 containerd[1440]: time="2025-07-12T00:08:36.975063269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:36.975734 containerd[1440]: time="2025-07-12T00:08:36.975704051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:36.976073 containerd[1440]: time="2025-07-12T00:08:36.975898494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:36.976073 containerd[1440]: time="2025-07-12T00:08:36.976031403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:37.004507 systemd[1]: Started cri-containerd-4bf2b9ae80bc0c074a7d7f8b4b1d13327e8acab4e68f1ece67c563a30a0f69a7.scope - libcontainer container 4bf2b9ae80bc0c074a7d7f8b4b1d13327e8acab4e68f1ece67c563a30a0f69a7.
Jul 12 00:08:37.038891 containerd[1440]: time="2025-07-12T00:08:37.038797436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-qjmw9,Uid:f2408fe6-a86e-42cc-9a37-90228518a127,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4bf2b9ae80bc0c074a7d7f8b4b1d13327e8acab4e68f1ece67c563a30a0f69a7\""
Jul 12 00:08:37.042120 containerd[1440]: time="2025-07-12T00:08:37.042074042Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 12 00:08:38.141905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224497021.mount: Deactivated successfully.
Jul 12 00:08:38.669317 containerd[1440]: time="2025-07-12T00:08:38.669243362Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:08:38.669852 containerd[1440]: time="2025-07-12T00:08:38.669815715Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 12 00:08:38.670709 containerd[1440]: time="2025-07-12T00:08:38.670679806Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:08:38.672867 containerd[1440]: time="2025-07-12T00:08:38.672814069Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:08:38.674266 containerd[1440]: time="2025-07-12T00:08:38.674110086Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.631993715s"
Jul 12 00:08:38.674266 containerd[1440]: time="2025-07-12T00:08:38.674146813Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 12 00:08:38.679312 containerd[1440]: time="2025-07-12T00:08:38.679261706Z" level=info msg="CreateContainer within sandbox \"4bf2b9ae80bc0c074a7d7f8b4b1d13327e8acab4e68f1ece67c563a30a0f69a7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 12 00:08:38.696703 containerd[1440]: time="2025-07-12T00:08:38.696624066Z" level=info msg="CreateContainer within sandbox \"4bf2b9ae80bc0c074a7d7f8b4b1d13327e8acab4e68f1ece67c563a30a0f69a7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9650d922bd3cf54065f0d799d9245e43bff007b8feadc29dde6337f0ab01f7f2\""
Jul 12 00:08:38.697238 containerd[1440]: time="2025-07-12T00:08:38.697102801Z" level=info msg="StartContainer for \"9650d922bd3cf54065f0d799d9245e43bff007b8feadc29dde6337f0ab01f7f2\""
Jul 12 00:08:38.723490 systemd[1]: Started cri-containerd-9650d922bd3cf54065f0d799d9245e43bff007b8feadc29dde6337f0ab01f7f2.scope - libcontainer container 9650d922bd3cf54065f0d799d9245e43bff007b8feadc29dde6337f0ab01f7f2.
Jul 12 00:08:38.742827 containerd[1440]: time="2025-07-12T00:08:38.742785251Z" level=info msg="StartContainer for \"9650d922bd3cf54065f0d799d9245e43bff007b8feadc29dde6337f0ab01f7f2\" returns successfully"
Jul 12 00:08:38.869094 kubelet[2458]: I0712 00:08:38.868835 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-74298" podStartSLOduration=2.868816539 podStartE2EDuration="2.868816539s" podCreationTimestamp="2025-07-12 00:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:36.874358014 +0000 UTC m=+7.130593854" watchObservedRunningTime="2025-07-12 00:08:38.868816539 +0000 UTC m=+9.125052339"
Jul 12 00:08:38.869094 kubelet[2458]: I0712 00:08:38.868960 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-qjmw9" podStartSLOduration=1.2320095389999999 podStartE2EDuration="2.868954966s" podCreationTimestamp="2025-07-12 00:08:36 +0000 UTC" firstStartedPulling="2025-07-12 00:08:37.041118002 +0000 UTC m=+7.297353722" lastFinishedPulling="2025-07-12 00:08:38.678063389 +0000 UTC m=+8.934299149" observedRunningTime="2025-07-12 00:08:38.868714679 +0000 UTC m=+9.124950439" watchObservedRunningTime="2025-07-12 00:08:38.868954966 +0000 UTC m=+9.125190726"
Jul 12 00:08:39.524747 kubelet[2458]: E0712 00:08:39.524306 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:39.857654 kubelet[2458]: E0712 00:08:39.857312 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:41.243557 kubelet[2458]: E0712 00:08:41.243342 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:43.995859 kubelet[2458]: E0712 00:08:43.995817 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:44.461935 sudo[1617]: pam_unix(sudo:session): session closed for user root
Jul 12 00:08:44.470440 sshd[1614]: pam_unix(sshd:session): session closed for user core
Jul 12 00:08:44.475231 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit.
Jul 12 00:08:44.475786 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:42386.service: Deactivated successfully.
Jul 12 00:08:44.478968 systemd[1]: session-7.scope: Deactivated successfully.
Jul 12 00:08:44.479162 systemd[1]: session-7.scope: Consumed 6.627s CPU time, 153.7M memory peak, 0B memory swap peak.
Jul 12 00:08:44.480200 systemd-logind[1421]: Removed session 7.
Jul 12 00:08:46.056375 update_engine[1426]: I20250712 00:08:46.056309 1426 update_attempter.cc:509] Updating boot flags...
Jul 12 00:08:46.148758 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2874)
Jul 12 00:08:46.221604 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2878)
Jul 12 00:08:49.788039 systemd[1]: Created slice kubepods-besteffort-pod5689df89_92f2_435f_9b2a_4e843528239c.slice - libcontainer container kubepods-besteffort-pod5689df89_92f2_435f_9b2a_4e843528239c.slice.
Jul 12 00:08:49.858678 kubelet[2458]: I0712 00:08:49.858621 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5689df89-92f2-435f-9b2a-4e843528239c-tigera-ca-bundle\") pod \"calico-typha-5b9cd49b55-k6npz\" (UID: \"5689df89-92f2-435f-9b2a-4e843528239c\") " pod="calico-system/calico-typha-5b9cd49b55-k6npz"
Jul 12 00:08:49.858678 kubelet[2458]: I0712 00:08:49.858676 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5689df89-92f2-435f-9b2a-4e843528239c-typha-certs\") pod \"calico-typha-5b9cd49b55-k6npz\" (UID: \"5689df89-92f2-435f-9b2a-4e843528239c\") " pod="calico-system/calico-typha-5b9cd49b55-k6npz"
Jul 12 00:08:49.859069 kubelet[2458]: I0712 00:08:49.858695 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczbp\" (UniqueName: \"kubernetes.io/projected/5689df89-92f2-435f-9b2a-4e843528239c-kube-api-access-xczbp\") pod \"calico-typha-5b9cd49b55-k6npz\" (UID: \"5689df89-92f2-435f-9b2a-4e843528239c\") " pod="calico-system/calico-typha-5b9cd49b55-k6npz"
Jul 12 00:08:50.074550 systemd[1]: Created slice kubepods-besteffort-pod45538028_d6cc_4b73_8f99_283673e03e9b.slice - libcontainer container kubepods-besteffort-pod45538028_d6cc_4b73_8f99_283673e03e9b.slice.
Jul 12 00:08:50.093351 kubelet[2458]: E0712 00:08:50.093220 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:08:50.093962 containerd[1440]: time="2025-07-12T00:08:50.093922240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9cd49b55-k6npz,Uid:5689df89-92f2-435f-9b2a-4e843528239c,Namespace:calico-system,Attempt:0,}"
Jul 12 00:08:50.121117 containerd[1440]: time="2025-07-12T00:08:50.120343309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:50.121539 containerd[1440]: time="2025-07-12T00:08:50.121079827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:50.121539 containerd[1440]: time="2025-07-12T00:08:50.121116071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:50.121539 containerd[1440]: time="2025-07-12T00:08:50.121219002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:50.142495 systemd[1]: Started cri-containerd-922b911913d8d1ea070cfce619d414b53799c8a06c36c4a2646248dd4d6297c8.scope - libcontainer container 922b911913d8d1ea070cfce619d414b53799c8a06c36c4a2646248dd4d6297c8.
Jul 12 00:08:50.160558 kubelet[2458]: I0712 00:08:50.160515 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/45538028-d6cc-4b73-8f99-283673e03e9b-cni-bin-dir\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.160558 kubelet[2458]: I0712 00:08:50.160558 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/45538028-d6cc-4b73-8f99-283673e03e9b-var-lib-calico\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.160829 kubelet[2458]: I0712 00:08:50.160576 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/45538028-d6cc-4b73-8f99-283673e03e9b-var-run-calico\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.160829 kubelet[2458]: I0712 00:08:50.160593 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45538028-d6cc-4b73-8f99-283673e03e9b-xtables-lock\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.160829 kubelet[2458]: I0712 00:08:50.160611 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/45538028-d6cc-4b73-8f99-283673e03e9b-flexvol-driver-host\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.160829 kubelet[2458]: I0712 00:08:50.160630 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmcsk\" (UniqueName: \"kubernetes.io/projected/45538028-d6cc-4b73-8f99-283673e03e9b-kube-api-access-cmcsk\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.160829 kubelet[2458]: I0712 00:08:50.160648 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/45538028-d6cc-4b73-8f99-283673e03e9b-policysync\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.160962 kubelet[2458]: I0712 00:08:50.160662 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45538028-d6cc-4b73-8f99-283673e03e9b-tigera-ca-bundle\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.160962 kubelet[2458]: I0712 00:08:50.160685 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45538028-d6cc-4b73-8f99-283673e03e9b-lib-modules\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.160962 kubelet[2458]: I0712 00:08:50.160703 2458 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/45538028-d6cc-4b73-8f99-283673e03e9b-node-certs\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.160962 kubelet[2458]: I0712 00:08:50.160718 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/45538028-d6cc-4b73-8f99-283673e03e9b-cni-log-dir\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.160962 kubelet[2458]: I0712 00:08:50.160733 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/45538028-d6cc-4b73-8f99-283673e03e9b-cni-net-dir\") pod \"calico-node-p2v6g\" (UID: \"45538028-d6cc-4b73-8f99-283673e03e9b\") " pod="calico-system/calico-node-p2v6g" Jul 12 00:08:50.180950 containerd[1440]: time="2025-07-12T00:08:50.180896071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9cd49b55-k6npz,Uid:5689df89-92f2-435f-9b2a-4e843528239c,Namespace:calico-system,Attempt:0,} returns sandbox id \"922b911913d8d1ea070cfce619d414b53799c8a06c36c4a2646248dd4d6297c8\"" Jul 12 00:08:50.182237 kubelet[2458]: E0712 00:08:50.181935 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:08:50.183586 containerd[1440]: time="2025-07-12T00:08:50.183470746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 00:08:50.263902 kubelet[2458]: E0712 00:08:50.263788 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.263902 kubelet[2458]: W0712 00:08:50.263837 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.264403 kubelet[2458]: E0712 00:08:50.263870 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.264718 kubelet[2458]: E0712 00:08:50.264656 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.264718 kubelet[2458]: W0712 00:08:50.264673 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.264718 kubelet[2458]: E0712 00:08:50.264689 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.269171 kubelet[2458]: E0712 00:08:50.269147 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.269171 kubelet[2458]: W0712 00:08:50.269169 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.269296 kubelet[2458]: E0712 00:08:50.269188 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.273059 kubelet[2458]: E0712 00:08:50.273034 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.273059 kubelet[2458]: W0712 00:08:50.273055 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.273170 kubelet[2458]: E0712 00:08:50.273074 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.362767 kubelet[2458]: E0712 00:08:50.361760 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lc77w" podUID="5145d8e7-900c-4ad8-a934-1061d118e33b" Jul 12 00:08:50.380955 containerd[1440]: time="2025-07-12T00:08:50.380348662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p2v6g,Uid:45538028-d6cc-4b73-8f99-283673e03e9b,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:50.401435 containerd[1440]: time="2025-07-12T00:08:50.401218576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:50.401853 containerd[1440]: time="2025-07-12T00:08:50.401438199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:50.401853 containerd[1440]: time="2025-07-12T00:08:50.401819920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:50.401951 containerd[1440]: time="2025-07-12T00:08:50.401912010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:50.427515 systemd[1]: Started cri-containerd-5f919077a71bf3b21b03e0df3303e1731944251bea65506a995eafafc205005b.scope - libcontainer container 5f919077a71bf3b21b03e0df3303e1731944251bea65506a995eafafc205005b. 
Jul 12 00:08:50.453565 containerd[1440]: time="2025-07-12T00:08:50.453432365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p2v6g,Uid:45538028-d6cc-4b73-8f99-283673e03e9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f919077a71bf3b21b03e0df3303e1731944251bea65506a995eafafc205005b\"" Jul 12 00:08:50.463431 kubelet[2458]: E0712 00:08:50.463169 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.463431 kubelet[2458]: W0712 00:08:50.463198 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.464726 kubelet[2458]: E0712 00:08:50.463232 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.464947 kubelet[2458]: E0712 00:08:50.464835 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.464947 kubelet[2458]: W0712 00:08:50.464849 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.464947 kubelet[2458]: E0712 00:08:50.464910 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.465476 kubelet[2458]: E0712 00:08:50.465414 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.465476 kubelet[2458]: W0712 00:08:50.465428 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.465725 kubelet[2458]: E0712 00:08:50.465441 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.466121 kubelet[2458]: E0712 00:08:50.465983 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.466121 kubelet[2458]: W0712 00:08:50.465998 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.466121 kubelet[2458]: E0712 00:08:50.466009 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.466335 kubelet[2458]: E0712 00:08:50.466322 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.466419 kubelet[2458]: W0712 00:08:50.466407 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.466495 kubelet[2458]: E0712 00:08:50.466484 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.467452 kubelet[2458]: E0712 00:08:50.467366 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.467452 kubelet[2458]: W0712 00:08:50.467388 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.467729 kubelet[2458]: E0712 00:08:50.467596 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.469096 kubelet[2458]: E0712 00:08:50.469054 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.469753 kubelet[2458]: W0712 00:08:50.469533 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.469753 kubelet[2458]: E0712 00:08:50.469564 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.470573 kubelet[2458]: E0712 00:08:50.470529 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.470573 kubelet[2458]: W0712 00:08:50.470545 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.470573 kubelet[2458]: E0712 00:08:50.470558 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.473290 kubelet[2458]: E0712 00:08:50.471356 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.473290 kubelet[2458]: W0712 00:08:50.471375 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.473290 kubelet[2458]: E0712 00:08:50.471389 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.473290 kubelet[2458]: E0712 00:08:50.471604 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.473290 kubelet[2458]: W0712 00:08:50.471616 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.473290 kubelet[2458]: E0712 00:08:50.471626 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.473461 kubelet[2458]: E0712 00:08:50.473429 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.473461 kubelet[2458]: W0712 00:08:50.473442 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.473461 kubelet[2458]: E0712 00:08:50.473456 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.473679 kubelet[2458]: E0712 00:08:50.473661 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.473679 kubelet[2458]: W0712 00:08:50.473675 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.473737 kubelet[2458]: E0712 00:08:50.473686 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.473898 kubelet[2458]: E0712 00:08:50.473879 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.473898 kubelet[2458]: W0712 00:08:50.473892 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.474007 kubelet[2458]: E0712 00:08:50.473902 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.474097 kubelet[2458]: E0712 00:08:50.474078 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.474097 kubelet[2458]: W0712 00:08:50.474093 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.474154 kubelet[2458]: E0712 00:08:50.474103 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.474963 kubelet[2458]: E0712 00:08:50.474933 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.474963 kubelet[2458]: W0712 00:08:50.474958 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.475025 kubelet[2458]: E0712 00:08:50.474972 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.475297 kubelet[2458]: E0712 00:08:50.475271 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.475330 kubelet[2458]: W0712 00:08:50.475294 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.475330 kubelet[2458]: E0712 00:08:50.475310 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.475520 kubelet[2458]: E0712 00:08:50.475505 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.475520 kubelet[2458]: W0712 00:08:50.475517 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.475565 kubelet[2458]: E0712 00:08:50.475526 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.476284 kubelet[2458]: E0712 00:08:50.475675 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.476284 kubelet[2458]: W0712 00:08:50.475685 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.476284 kubelet[2458]: E0712 00:08:50.475695 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.476284 kubelet[2458]: E0712 00:08:50.475841 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.476284 kubelet[2458]: W0712 00:08:50.475850 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.476284 kubelet[2458]: E0712 00:08:50.475859 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.476284 kubelet[2458]: E0712 00:08:50.476026 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.476284 kubelet[2458]: W0712 00:08:50.476035 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.476284 kubelet[2458]: E0712 00:08:50.476044 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.476826 kubelet[2458]: E0712 00:08:50.476556 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.476826 kubelet[2458]: W0712 00:08:50.476575 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.476826 kubelet[2458]: E0712 00:08:50.476591 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.476826 kubelet[2458]: I0712 00:08:50.476623 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5145d8e7-900c-4ad8-a934-1061d118e33b-kubelet-dir\") pod \"csi-node-driver-lc77w\" (UID: \"5145d8e7-900c-4ad8-a934-1061d118e33b\") " pod="calico-system/csi-node-driver-lc77w" Jul 12 00:08:50.477607 kubelet[2458]: E0712 00:08:50.477443 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.477607 kubelet[2458]: W0712 00:08:50.477464 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.477607 kubelet[2458]: E0712 00:08:50.477486 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.477607 kubelet[2458]: I0712 00:08:50.477510 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h69w\" (UniqueName: \"kubernetes.io/projected/5145d8e7-900c-4ad8-a934-1061d118e33b-kube-api-access-5h69w\") pod \"csi-node-driver-lc77w\" (UID: \"5145d8e7-900c-4ad8-a934-1061d118e33b\") " pod="calico-system/csi-node-driver-lc77w" Jul 12 00:08:50.478364 kubelet[2458]: E0712 00:08:50.478344 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.478364 kubelet[2458]: W0712 00:08:50.478361 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.478456 kubelet[2458]: E0712 00:08:50.478382 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.478456 kubelet[2458]: I0712 00:08:50.478403 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5145d8e7-900c-4ad8-a934-1061d118e33b-socket-dir\") pod \"csi-node-driver-lc77w\" (UID: \"5145d8e7-900c-4ad8-a934-1061d118e33b\") " pod="calico-system/csi-node-driver-lc77w" Jul 12 00:08:50.479379 kubelet[2458]: E0712 00:08:50.479021 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.479379 kubelet[2458]: W0712 00:08:50.479049 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.479379 kubelet[2458]: E0712 00:08:50.479069 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.479914 kubelet[2458]: E0712 00:08:50.479894 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.479977 kubelet[2458]: W0712 00:08:50.479921 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.480077 kubelet[2458]: E0712 00:08:50.480035 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.480190 kubelet[2458]: E0712 00:08:50.480178 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.480231 kubelet[2458]: W0712 00:08:50.480191 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.480313 kubelet[2458]: E0712 00:08:50.480268 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.480974 kubelet[2458]: E0712 00:08:50.480954 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.480974 kubelet[2458]: W0712 00:08:50.480972 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.481070 kubelet[2458]: E0712 00:08:50.481008 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.481070 kubelet[2458]: I0712 00:08:50.481052 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5145d8e7-900c-4ad8-a934-1061d118e33b-registration-dir\") pod \"csi-node-driver-lc77w\" (UID: \"5145d8e7-900c-4ad8-a934-1061d118e33b\") " pod="calico-system/csi-node-driver-lc77w" Jul 12 00:08:50.481554 kubelet[2458]: E0712 00:08:50.481319 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.481554 kubelet[2458]: W0712 00:08:50.481330 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.481554 kubelet[2458]: E0712 00:08:50.481377 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.481916 kubelet[2458]: E0712 00:08:50.481882 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.481916 kubelet[2458]: W0712 00:08:50.481899 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.481916 kubelet[2458]: E0712 00:08:50.481913 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.482313 kubelet[2458]: E0712 00:08:50.482154 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.482313 kubelet[2458]: W0712 00:08:50.482167 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.482313 kubelet[2458]: E0712 00:08:50.482182 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.482548 kubelet[2458]: E0712 00:08:50.482504 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.482548 kubelet[2458]: W0712 00:08:50.482520 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.482548 kubelet[2458]: E0712 00:08:50.482535 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.483177 kubelet[2458]: E0712 00:08:50.483133 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.483177 kubelet[2458]: W0712 00:08:50.483148 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.483177 kubelet[2458]: E0712 00:08:50.483161 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.483716 kubelet[2458]: E0712 00:08:50.483605 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.483716 kubelet[2458]: W0712 00:08:50.483618 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.483716 kubelet[2458]: E0712 00:08:50.483659 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.484027 kubelet[2458]: I0712 00:08:50.483694 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5145d8e7-900c-4ad8-a934-1061d118e33b-varrun\") pod \"csi-node-driver-lc77w\" (UID: \"5145d8e7-900c-4ad8-a934-1061d118e33b\") " pod="calico-system/csi-node-driver-lc77w" Jul 12 00:08:50.484378 kubelet[2458]: E0712 00:08:50.484194 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.484378 kubelet[2458]: W0712 00:08:50.484208 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.484378 kubelet[2458]: E0712 00:08:50.484221 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.484610 kubelet[2458]: E0712 00:08:50.484597 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.484673 kubelet[2458]: W0712 00:08:50.484655 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.485271 kubelet[2458]: E0712 00:08:50.485248 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.585469 kubelet[2458]: E0712 00:08:50.585440 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.585469 kubelet[2458]: W0712 00:08:50.585462 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.585697 kubelet[2458]: E0712 00:08:50.585484 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.585697 kubelet[2458]: E0712 00:08:50.585662 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.585697 kubelet[2458]: W0712 00:08:50.585671 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.585697 kubelet[2458]: E0712 00:08:50.585687 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.585935 kubelet[2458]: E0712 00:08:50.585920 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.585935 kubelet[2458]: W0712 00:08:50.585932 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.586093 kubelet[2458]: E0712 00:08:50.585945 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.586159 kubelet[2458]: E0712 00:08:50.586146 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.586159 kubelet[2458]: W0712 00:08:50.586157 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.586217 kubelet[2458]: E0712 00:08:50.586170 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.586377 kubelet[2458]: E0712 00:08:50.586366 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.586377 kubelet[2458]: W0712 00:08:50.586377 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.586444 kubelet[2458]: E0712 00:08:50.586394 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.586619 kubelet[2458]: E0712 00:08:50.586607 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.586647 kubelet[2458]: W0712 00:08:50.586619 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.586647 kubelet[2458]: E0712 00:08:50.586631 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.588505 kubelet[2458]: E0712 00:08:50.588487 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.588505 kubelet[2458]: W0712 00:08:50.588503 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.588613 kubelet[2458]: E0712 00:08:50.588596 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.588720 kubelet[2458]: E0712 00:08:50.588710 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.588720 kubelet[2458]: W0712 00:08:50.588719 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.588812 kubelet[2458]: E0712 00:08:50.588799 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.588887 kubelet[2458]: E0712 00:08:50.588875 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.588887 kubelet[2458]: W0712 00:08:50.588885 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.588939 kubelet[2458]: E0712 00:08:50.588913 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.589074 kubelet[2458]: E0712 00:08:50.589051 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.589074 kubelet[2458]: W0712 00:08:50.589062 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.589139 kubelet[2458]: E0712 00:08:50.589129 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.589257 kubelet[2458]: E0712 00:08:50.589246 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.589315 kubelet[2458]: W0712 00:08:50.589258 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.589315 kubelet[2458]: E0712 00:08:50.589290 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.589447 kubelet[2458]: E0712 00:08:50.589436 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.589447 kubelet[2458]: W0712 00:08:50.589445 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.589507 kubelet[2458]: E0712 00:08:50.589457 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.589653 kubelet[2458]: E0712 00:08:50.589642 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.589653 kubelet[2458]: W0712 00:08:50.589653 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.589724 kubelet[2458]: E0712 00:08:50.589665 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.590203 kubelet[2458]: E0712 00:08:50.590179 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.590203 kubelet[2458]: W0712 00:08:50.590195 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.590300 kubelet[2458]: E0712 00:08:50.590217 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.591381 kubelet[2458]: E0712 00:08:50.590487 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.591381 kubelet[2458]: W0712 00:08:50.590502 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.591381 kubelet[2458]: E0712 00:08:50.590600 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.591381 kubelet[2458]: E0712 00:08:50.590747 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.591381 kubelet[2458]: W0712 00:08:50.590756 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.591381 kubelet[2458]: E0712 00:08:50.590852 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.591381 kubelet[2458]: E0712 00:08:50.591020 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.591381 kubelet[2458]: W0712 00:08:50.591029 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.591381 kubelet[2458]: E0712 00:08:50.591128 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.591381 kubelet[2458]: E0712 00:08:50.591238 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.591753 kubelet[2458]: W0712 00:08:50.591246 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.591753 kubelet[2458]: E0712 00:08:50.591300 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.591753 kubelet[2458]: E0712 00:08:50.591541 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.591753 kubelet[2458]: W0712 00:08:50.591553 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.591753 kubelet[2458]: E0712 00:08:50.591573 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.592643 kubelet[2458]: E0712 00:08:50.592606 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.592643 kubelet[2458]: W0712 00:08:50.592627 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.592864 kubelet[2458]: E0712 00:08:50.592840 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.593514 kubelet[2458]: E0712 00:08:50.593500 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.593560 kubelet[2458]: W0712 00:08:50.593547 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.593606 kubelet[2458]: E0712 00:08:50.593568 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.593837 kubelet[2458]: E0712 00:08:50.593820 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.593837 kubelet[2458]: W0712 00:08:50.593837 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.593907 kubelet[2458]: E0712 00:08:50.593899 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.594089 kubelet[2458]: E0712 00:08:50.594064 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.594117 kubelet[2458]: W0712 00:08:50.594090 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.594213 kubelet[2458]: E0712 00:08:50.594166 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.594377 kubelet[2458]: E0712 00:08:50.594358 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.594377 kubelet[2458]: W0712 00:08:50.594376 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.594449 kubelet[2458]: E0712 00:08:50.594392 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:50.594919 kubelet[2458]: E0712 00:08:50.594904 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.594965 kubelet[2458]: W0712 00:08:50.594919 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.594965 kubelet[2458]: E0712 00:08:50.594944 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:50.601308 kubelet[2458]: E0712 00:08:50.601285 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:50.601308 kubelet[2458]: W0712 00:08:50.601304 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:50.601422 kubelet[2458]: E0712 00:08:50.601320 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:51.149588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394467091.mount: Deactivated successfully. Jul 12 00:08:51.829960 kubelet[2458]: E0712 00:08:51.829871 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lc77w" podUID="5145d8e7-900c-4ad8-a934-1061d118e33b" Jul 12 00:08:52.132709 containerd[1440]: time="2025-07-12T00:08:52.132578685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:52.137965 containerd[1440]: time="2025-07-12T00:08:52.137204496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 12 00:08:52.141472 containerd[1440]: time="2025-07-12T00:08:52.141433989Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:52.147711 containerd[1440]: time="2025-07-12T00:08:52.147567948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:52.148775 containerd[1440]: time="2025-07-12T00:08:52.148730341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.965207989s" Jul 12 00:08:52.148775 containerd[1440]: time="2025-07-12T00:08:52.148773425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:08:52.150296 containerd[1440]: time="2025-07-12T00:08:52.150206605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:08:52.162576 containerd[1440]: time="2025-07-12T00:08:52.162100126Z" level=info msg="CreateContainer within sandbox \"922b911913d8d1ea070cfce619d414b53799c8a06c36c4a2646248dd4d6297c8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:08:52.199661 containerd[1440]: time="2025-07-12T00:08:52.199617829Z" level=info msg="CreateContainer within sandbox \"922b911913d8d1ea070cfce619d414b53799c8a06c36c4a2646248dd4d6297c8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"e85f036a64ae8e697b5d96cfdd2b478c649933ecde79577a74127a7e2333db73\"" Jul 12 00:08:52.200136 containerd[1440]: time="2025-07-12T00:08:52.200104156Z" level=info msg="StartContainer for \"e85f036a64ae8e697b5d96cfdd2b478c649933ecde79577a74127a7e2333db73\"" Jul 12 00:08:52.229475 systemd[1]: Started cri-containerd-e85f036a64ae8e697b5d96cfdd2b478c649933ecde79577a74127a7e2333db73.scope - libcontainer container e85f036a64ae8e697b5d96cfdd2b478c649933ecde79577a74127a7e2333db73. Jul 12 00:08:52.265210 containerd[1440]: time="2025-07-12T00:08:52.265170588Z" level=info msg="StartContainer for \"e85f036a64ae8e697b5d96cfdd2b478c649933ecde79577a74127a7e2333db73\" returns successfully" Jul 12 00:08:52.891762 kubelet[2458]: E0712 00:08:52.891732 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:08:52.894236 kubelet[2458]: E0712 00:08:52.893691 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.894236 kubelet[2458]: W0712 00:08:52.893713 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.894236 kubelet[2458]: E0712 00:08:52.893731 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.894236 kubelet[2458]: E0712 00:08:52.893988 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.894236 kubelet[2458]: W0712 00:08:52.894087 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.894236 kubelet[2458]: E0712 00:08:52.894104 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.895505 kubelet[2458]: E0712 00:08:52.894345 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.895505 kubelet[2458]: W0712 00:08:52.894355 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.895505 kubelet[2458]: E0712 00:08:52.894365 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:52.895791 kubelet[2458]: E0712 00:08:52.895772 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.895835 kubelet[2458]: W0712 00:08:52.895792 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.895835 kubelet[2458]: E0712 00:08:52.895806 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.896229 kubelet[2458]: E0712 00:08:52.896170 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.896229 kubelet[2458]: W0712 00:08:52.896214 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.896229 kubelet[2458]: E0712 00:08:52.896227 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.896421 kubelet[2458]: E0712 00:08:52.896407 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.896421 kubelet[2458]: W0712 00:08:52.896419 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.896477 kubelet[2458]: E0712 00:08:52.896428 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.896586 kubelet[2458]: E0712 00:08:52.896569 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.896616 kubelet[2458]: W0712 00:08:52.896586 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.896616 kubelet[2458]: E0712 00:08:52.896595 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.896747 kubelet[2458]: E0712 00:08:52.896730 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.896778 kubelet[2458]: W0712 00:08:52.896748 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.896778 kubelet[2458]: E0712 00:08:52.896757 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:52.896951 kubelet[2458]: E0712 00:08:52.896938 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.896951 kubelet[2458]: W0712 00:08:52.896950 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.897011 kubelet[2458]: E0712 00:08:52.896959 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.897116 kubelet[2458]: E0712 00:08:52.897104 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.897116 kubelet[2458]: W0712 00:08:52.897115 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.897169 kubelet[2458]: E0712 00:08:52.897123 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.897279 kubelet[2458]: E0712 00:08:52.897256 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.897309 kubelet[2458]: W0712 00:08:52.897280 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.897309 kubelet[2458]: E0712 00:08:52.897290 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.897449 kubelet[2458]: E0712 00:08:52.897434 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.897477 kubelet[2458]: W0712 00:08:52.897450 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.897477 kubelet[2458]: E0712 00:08:52.897459 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.897607 kubelet[2458]: E0712 00:08:52.897594 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.897636 kubelet[2458]: W0712 00:08:52.897611 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.897636 kubelet[2458]: E0712 00:08:52.897620 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:52.897756 kubelet[2458]: E0712 00:08:52.897743 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.897792 kubelet[2458]: W0712 00:08:52.897760 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.897792 kubelet[2458]: E0712 00:08:52.897769 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.897911 kubelet[2458]: E0712 00:08:52.897899 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.897940 kubelet[2458]: W0712 00:08:52.897916 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.897940 kubelet[2458]: E0712 00:08:52.897924 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.903245 kubelet[2458]: I0712 00:08:52.903161 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b9cd49b55-k6npz" podStartSLOduration=1.935974758 podStartE2EDuration="3.903132424s" podCreationTimestamp="2025-07-12 00:08:49 +0000 UTC" firstStartedPulling="2025-07-12 00:08:50.182868922 +0000 UTC m=+20.439104682" lastFinishedPulling="2025-07-12 00:08:52.150026588 +0000 UTC m=+22.406262348" observedRunningTime="2025-07-12 00:08:52.902460079 +0000 UTC m=+23.158695919" watchObservedRunningTime="2025-07-12 00:08:52.903132424 +0000 UTC m=+23.159368184" Jul 12 00:08:52.907472 kubelet[2458]: E0712 00:08:52.907443 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.907472 kubelet[2458]: W0712 00:08:52.907466 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.907616 kubelet[2458]: E0712 00:08:52.907486 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.907726 kubelet[2458]: E0712 00:08:52.907713 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.907726 kubelet[2458]: W0712 00:08:52.907725 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.907787 kubelet[2458]: E0712 00:08:52.907739 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:52.907992 kubelet[2458]: E0712 00:08:52.907977 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.907992 kubelet[2458]: W0712 00:08:52.907989 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.908060 kubelet[2458]: E0712 00:08:52.908004 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.908212 kubelet[2458]: E0712 00:08:52.908198 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.908212 kubelet[2458]: W0712 00:08:52.908210 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.908286 kubelet[2458]: E0712 00:08:52.908224 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.908398 kubelet[2458]: E0712 00:08:52.908377 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.908398 kubelet[2458]: W0712 00:08:52.908388 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.908398 kubelet[2458]: E0712 00:08:52.908397 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.908548 kubelet[2458]: E0712 00:08:52.908537 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.908548 kubelet[2458]: W0712 00:08:52.908547 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.908727 kubelet[2458]: E0712 00:08:52.908560 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.908822 kubelet[2458]: E0712 00:08:52.908805 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.908878 kubelet[2458]: W0712 00:08:52.908866 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.909025 kubelet[2458]: E0712 00:08:52.908927 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:52.909224 kubelet[2458]: E0712 00:08:52.909117 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.909224 kubelet[2458]: W0712 00:08:52.909131 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.909224 kubelet[2458]: E0712 00:08:52.909152 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.909395 kubelet[2458]: E0712 00:08:52.909382 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.909452 kubelet[2458]: W0712 00:08:52.909441 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.909528 kubelet[2458]: E0712 00:08:52.909508 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.909739 kubelet[2458]: E0712 00:08:52.909725 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.909809 kubelet[2458]: W0712 00:08:52.909798 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.909872 kubelet[2458]: E0712 00:08:52.909861 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.910050 kubelet[2458]: E0712 00:08:52.910028 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.910050 kubelet[2458]: W0712 00:08:52.910040 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.910106 kubelet[2458]: E0712 00:08:52.910058 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.910254 kubelet[2458]: E0712 00:08:52.910240 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.910254 kubelet[2458]: W0712 00:08:52.910250 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.910332 kubelet[2458]: E0712 00:08:52.910262 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:52.910604 kubelet[2458]: E0712 00:08:52.910588 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.910677 kubelet[2458]: W0712 00:08:52.910664 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.910807 kubelet[2458]: E0712 00:08:52.910726 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.911022 kubelet[2458]: E0712 00:08:52.910894 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.911022 kubelet[2458]: W0712 00:08:52.910906 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.911022 kubelet[2458]: E0712 00:08:52.910920 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.911197 kubelet[2458]: E0712 00:08:52.911167 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.911291 kubelet[2458]: W0712 00:08:52.911257 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.911366 kubelet[2458]: E0712 00:08:52.911353 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.912393 kubelet[2458]: E0712 00:08:52.912374 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.912469 kubelet[2458]: W0712 00:08:52.912456 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.912545 kubelet[2458]: E0712 00:08:52.912533 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:52.912986 kubelet[2458]: E0712 00:08:52.912904 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.912986 kubelet[2458]: W0712 00:08:52.912924 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.912986 kubelet[2458]: E0712 00:08:52.912946 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:52.913673 kubelet[2458]: E0712 00:08:52.913646 2458 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:52.913673 kubelet[2458]: W0712 00:08:52.913662 2458 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:52.913673 kubelet[2458]: E0712 00:08:52.913675 2458 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:53.016700 containerd[1440]: time="2025-07-12T00:08:53.016649520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:53.017560 containerd[1440]: time="2025-07-12T00:08:53.017528962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 12 00:08:53.018074 containerd[1440]: time="2025-07-12T00:08:53.018053131Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:53.020124 containerd[1440]: time="2025-07-12T00:08:53.020070079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:53.021025 containerd[1440]: time="2025-07-12T00:08:53.020979604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 869.829066ms" Jul 12 00:08:53.021025 containerd[1440]: time="2025-07-12T00:08:53.021020088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:08:53.023193 containerd[1440]: time="2025-07-12T00:08:53.023058238Z" level=info msg="CreateContainer within sandbox \"5f919077a71bf3b21b03e0df3303e1731944251bea65506a995eafafc205005b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:08:53.033920 containerd[1440]: time="2025-07-12T00:08:53.033783159Z" level=info msg="CreateContainer within sandbox \"5f919077a71bf3b21b03e0df3303e1731944251bea65506a995eafafc205005b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7923fb7c54f58e9fbac0c3e88323f373b0fee800384535d4ebed6650ed863eda\"" Jul 12 00:08:53.034967 containerd[1440]: time="2025-07-12T00:08:53.034326090Z" level=info msg="StartContainer for \"7923fb7c54f58e9fbac0c3e88323f373b0fee800384535d4ebed6650ed863eda\"" Jul 12 00:08:53.058452 systemd[1]: Started cri-containerd-7923fb7c54f58e9fbac0c3e88323f373b0fee800384535d4ebed6650ed863eda.scope - libcontainer container 7923fb7c54f58e9fbac0c3e88323f373b0fee800384535d4ebed6650ed863eda. 
Jul 12 00:08:53.099731 containerd[1440]: time="2025-07-12T00:08:53.099611704Z" level=info msg="StartContainer for \"7923fb7c54f58e9fbac0c3e88323f373b0fee800384535d4ebed6650ed863eda\" returns successfully" Jul 12 00:08:53.106072 systemd[1]: cri-containerd-7923fb7c54f58e9fbac0c3e88323f373b0fee800384535d4ebed6650ed863eda.scope: Deactivated successfully. Jul 12 00:08:53.142635 containerd[1440]: time="2025-07-12T00:08:53.138831684Z" level=info msg="shim disconnected" id=7923fb7c54f58e9fbac0c3e88323f373b0fee800384535d4ebed6650ed863eda namespace=k8s.io Jul 12 00:08:53.142635 containerd[1440]: time="2025-07-12T00:08:53.142560912Z" level=warning msg="cleaning up after shim disconnected" id=7923fb7c54f58e9fbac0c3e88323f373b0fee800384535d4ebed6650ed863eda namespace=k8s.io Jul 12 00:08:53.142635 containerd[1440]: time="2025-07-12T00:08:53.142575594Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:08:53.153372 containerd[1440]: time="2025-07-12T00:08:53.153322197Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:08:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 12 00:08:53.829723 kubelet[2458]: E0712 00:08:53.829664 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lc77w" podUID="5145d8e7-900c-4ad8-a934-1061d118e33b" Jul 12 00:08:53.895115 kubelet[2458]: I0712 00:08:53.894599 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:53.895485 kubelet[2458]: E0712 00:08:53.895173 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:08:53.896705 containerd[1440]: time="2025-07-12T00:08:53.896674181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:08:55.832826 kubelet[2458]: E0712 00:08:55.829451 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lc77w" podUID="5145d8e7-900c-4ad8-a934-1061d118e33b" Jul 12 00:08:56.600651 containerd[1440]: time="2025-07-12T00:08:56.600592834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:56.601152 containerd[1440]: time="2025-07-12T00:08:56.601108396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 12 00:08:56.601887 containerd[1440]: time="2025-07-12T00:08:56.601853537Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:56.604599 containerd[1440]: time="2025-07-12T00:08:56.604556959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:56.605054 containerd[1440]: time="2025-07-12T00:08:56.605020197Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.708307452s" Jul 12 00:08:56.605054 containerd[1440]: time="2025-07-12T00:08:56.605050399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:08:56.610053 containerd[1440]: time="2025-07-12T00:08:56.609997965Z" level=info msg="CreateContainer within sandbox \"5f919077a71bf3b21b03e0df3303e1731944251bea65506a995eafafc205005b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:08:56.620941 containerd[1440]: time="2025-07-12T00:08:56.620879698Z" level=info msg="CreateContainer within sandbox \"5f919077a71bf3b21b03e0df3303e1731944251bea65506a995eafafc205005b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9fed3acd6f9f2f8d8dc9c0e2b3352394a284df537b57a667e6242a2e8d34dfd3\"" Jul 12 00:08:56.621537 containerd[1440]: time="2025-07-12T00:08:56.621500789Z" level=info msg="StartContainer for \"9fed3acd6f9f2f8d8dc9c0e2b3352394a284df537b57a667e6242a2e8d34dfd3\"" Jul 12 00:08:56.644685 systemd[1]: run-containerd-runc-k8s.io-9fed3acd6f9f2f8d8dc9c0e2b3352394a284df537b57a667e6242a2e8d34dfd3-runc.x0vvs6.mount: Deactivated successfully. Jul 12 00:08:56.665452 systemd[1]: Started cri-containerd-9fed3acd6f9f2f8d8dc9c0e2b3352394a284df537b57a667e6242a2e8d34dfd3.scope - libcontainer container 9fed3acd6f9f2f8d8dc9c0e2b3352394a284df537b57a667e6242a2e8d34dfd3. Jul 12 00:08:56.748963 containerd[1440]: time="2025-07-12T00:08:56.748899881Z" level=info msg="StartContainer for \"9fed3acd6f9f2f8d8dc9c0e2b3352394a284df537b57a667e6242a2e8d34dfd3\" returns successfully" Jul 12 00:08:57.308509 systemd[1]: cri-containerd-9fed3acd6f9f2f8d8dc9c0e2b3352394a284df537b57a667e6242a2e8d34dfd3.scope: Deactivated successfully. Jul 12 00:08:57.331306 containerd[1440]: time="2025-07-12T00:08:57.331240809Z" level=info msg="shim disconnected" id=9fed3acd6f9f2f8d8dc9c0e2b3352394a284df537b57a667e6242a2e8d34dfd3 namespace=k8s.io Jul 12 00:08:57.331522 containerd[1440]: time="2025-07-12T00:08:57.331501869Z" level=warning msg="cleaning up after shim disconnected" id=9fed3acd6f9f2f8d8dc9c0e2b3352394a284df537b57a667e6242a2e8d34dfd3 namespace=k8s.io Jul 12 00:08:57.331608 containerd[1440]: time="2025-07-12T00:08:57.331594157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:08:57.381096 kubelet[2458]: I0712 00:08:57.381048 2458 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:08:57.432590 systemd[1]: Created slice kubepods-besteffort-pod59cf923d_026d_4707_961e_6d99594b8482.slice - libcontainer container kubepods-besteffort-pod59cf923d_026d_4707_961e_6d99594b8482.slice. 
Jul 12 00:08:57.454807 kubelet[2458]: I0712 00:08:57.454750 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4163ae3e-c117-44b8-afd3-05959bb3dc8f-config\") pod \"goldmane-768f4c5c69-t657p\" (UID: \"4163ae3e-c117-44b8-afd3-05959bb3dc8f\") " pod="calico-system/goldmane-768f4c5c69-t657p" Jul 12 00:08:57.454807 kubelet[2458]: I0712 00:08:57.454800 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgvf7\" (UniqueName: \"kubernetes.io/projected/8de351fb-c5ee-4b34-82bf-ce57122f3ecf-kube-api-access-xgvf7\") pod \"calico-apiserver-dd4964cc-p2r92\" (UID: \"8de351fb-c5ee-4b34-82bf-ce57122f3ecf\") " pod="calico-apiserver/calico-apiserver-dd4964cc-p2r92" Jul 12 00:08:57.454986 kubelet[2458]: I0712 00:08:57.454823 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz9gm\" (UniqueName: \"kubernetes.io/projected/de5af1cf-2d96-4cd0-b664-9eb849bca08f-kube-api-access-xz9gm\") pod \"calico-apiserver-dd4964cc-mvr8g\" (UID: \"de5af1cf-2d96-4cd0-b664-9eb849bca08f\") " pod="calico-apiserver/calico-apiserver-dd4964cc-mvr8g" Jul 12 00:08:57.454986 kubelet[2458]: I0712 00:08:57.454838 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h52d6\" (UniqueName: \"kubernetes.io/projected/59cf923d-026d-4707-961e-6d99594b8482-kube-api-access-h52d6\") pod \"whisker-b95bdc6c4-c5hvb\" (UID: \"59cf923d-026d-4707-961e-6d99594b8482\") " pod="calico-system/whisker-b95bdc6c4-c5hvb" Jul 12 00:08:57.454986 kubelet[2458]: I0712 00:08:57.454854 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fzm\" (UniqueName: \"kubernetes.io/projected/4163ae3e-c117-44b8-afd3-05959bb3dc8f-kube-api-access-z5fzm\") pod \"goldmane-768f4c5c69-t657p\" (UID: \"4163ae3e-c117-44b8-afd3-05959bb3dc8f\") " pod="calico-system/goldmane-768f4c5c69-t657p" Jul 12 00:08:57.454986 kubelet[2458]: I0712 00:08:57.454869 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eb7bd0f-4915-42cf-9421-dbbd624c0e64-config-volume\") pod \"coredns-668d6bf9bc-ltzl2\" (UID: \"6eb7bd0f-4915-42cf-9421-dbbd624c0e64\") " pod="kube-system/coredns-668d6bf9bc-ltzl2" Jul 12 00:08:57.454986 kubelet[2458]: I0712 00:08:57.454887 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz7m6\" (UniqueName: \"kubernetes.io/projected/e479b3c4-2d61-4209-8ff7-602a3f90e035-kube-api-access-hz7m6\") pod \"coredns-668d6bf9bc-m9x52\" (UID: \"e479b3c4-2d61-4209-8ff7-602a3f90e035\") " pod="kube-system/coredns-668d6bf9bc-m9x52" Jul 12 00:08:57.455146 kubelet[2458]: I0712 00:08:57.454901 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqnj\" (UniqueName: \"kubernetes.io/projected/6eb7bd0f-4915-42cf-9421-dbbd624c0e64-kube-api-access-nqqnj\") pod \"coredns-668d6bf9bc-ltzl2\" (UID: \"6eb7bd0f-4915-42cf-9421-dbbd624c0e64\") " pod="kube-system/coredns-668d6bf9bc-ltzl2" Jul 12 00:08:57.455146 kubelet[2458]: I0712 00:08:57.454919 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4163ae3e-c117-44b8-afd3-05959bb3dc8f-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-t657p\" (UID: \"4163ae3e-c117-44b8-afd3-05959bb3dc8f\") " pod="calico-system/goldmane-768f4c5c69-t657p" Jul 12 00:08:57.455146 kubelet[2458]: I0712 00:08:57.454934 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e479b3c4-2d61-4209-8ff7-602a3f90e035-config-volume\") pod \"coredns-668d6bf9bc-m9x52\" (UID: \"e479b3c4-2d61-4209-8ff7-602a3f90e035\") " pod="kube-system/coredns-668d6bf9bc-m9x52" Jul 12 00:08:57.455146 kubelet[2458]: I0712 00:08:57.454953 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de5af1cf-2d96-4cd0-b664-9eb849bca08f-calico-apiserver-certs\") pod \"calico-apiserver-dd4964cc-mvr8g\" (UID: \"de5af1cf-2d96-4cd0-b664-9eb849bca08f\") " pod="calico-apiserver/calico-apiserver-dd4964cc-mvr8g" Jul 12 00:08:57.455146 kubelet[2458]: I0712 00:08:57.454973 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mtmb\" (UniqueName: \"kubernetes.io/projected/f1339e87-f0d0-41f4-9691-3d9be7937b47-kube-api-access-7mtmb\") pod \"calico-kube-controllers-748bcb9cdf-s67t2\" (UID: \"f1339e87-f0d0-41f4-9691-3d9be7937b47\") " pod="calico-system/calico-kube-controllers-748bcb9cdf-s67t2" Jul 12 00:08:57.455296 kubelet[2458]: I0712 00:08:57.454990 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4163ae3e-c117-44b8-afd3-05959bb3dc8f-goldmane-key-pair\") pod \"goldmane-768f4c5c69-t657p\" (UID: \"4163ae3e-c117-44b8-afd3-05959bb3dc8f\") " pod="calico-system/goldmane-768f4c5c69-t657p" Jul 12 00:08:57.455296 kubelet[2458]: I0712 00:08:57.455005 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8de351fb-c5ee-4b34-82bf-ce57122f3ecf-calico-apiserver-certs\") pod \"calico-apiserver-dd4964cc-p2r92\" (UID: \"8de351fb-c5ee-4b34-82bf-ce57122f3ecf\") " pod="calico-apiserver/calico-apiserver-dd4964cc-p2r92" Jul 12 00:08:57.455296 kubelet[2458]: I0712 00:08:57.455025 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1339e87-f0d0-41f4-9691-3d9be7937b47-tigera-ca-bundle\") pod \"calico-kube-controllers-748bcb9cdf-s67t2\" (UID: \"f1339e87-f0d0-41f4-9691-3d9be7937b47\") " pod="calico-system/calico-kube-controllers-748bcb9cdf-s67t2" Jul 12 00:08:57.455296 kubelet[2458]: I0712 00:08:57.455040 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/59cf923d-026d-4707-961e-6d99594b8482-whisker-backend-key-pair\") pod \"whisker-b95bdc6c4-c5hvb\" (UID: \"59cf923d-026d-4707-961e-6d99594b8482\") " pod="calico-system/whisker-b95bdc6c4-c5hvb" Jul 12 00:08:57.455296 kubelet[2458]: I0712 00:08:57.455055 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59cf923d-026d-4707-961e-6d99594b8482-whisker-ca-bundle\") pod \"whisker-b95bdc6c4-c5hvb\" (UID: \"59cf923d-026d-4707-961e-6d99594b8482\") " 
pod="calico-system/whisker-b95bdc6c4-c5hvb" Jul 12 00:08:57.459296 systemd[1]: Created slice kubepods-besteffort-pod8de351fb_c5ee_4b34_82bf_ce57122f3ecf.slice - libcontainer container kubepods-besteffort-pod8de351fb_c5ee_4b34_82bf_ce57122f3ecf.slice. Jul 12 00:08:57.467211 systemd[1]: Created slice kubepods-besteffort-pod4163ae3e_c117_44b8_afd3_05959bb3dc8f.slice - libcontainer container kubepods-besteffort-pod4163ae3e_c117_44b8_afd3_05959bb3dc8f.slice. Jul 12 00:08:57.482816 systemd[1]: Created slice kubepods-burstable-pode479b3c4_2d61_4209_8ff7_602a3f90e035.slice - libcontainer container kubepods-burstable-pode479b3c4_2d61_4209_8ff7_602a3f90e035.slice. Jul 12 00:08:57.488780 systemd[1]: Created slice kubepods-besteffort-podde5af1cf_2d96_4cd0_b664_9eb849bca08f.slice - libcontainer container kubepods-besteffort-podde5af1cf_2d96_4cd0_b664_9eb849bca08f.slice. Jul 12 00:08:57.496917 systemd[1]: Created slice kubepods-besteffort-podf1339e87_f0d0_41f4_9691_3d9be7937b47.slice - libcontainer container kubepods-besteffort-podf1339e87_f0d0_41f4_9691_3d9be7937b47.slice. Jul 12 00:08:57.500948 systemd[1]: Created slice kubepods-burstable-pod6eb7bd0f_4915_42cf_9421_dbbd624c0e64.slice - libcontainer container kubepods-burstable-pod6eb7bd0f_4915_42cf_9421_dbbd624c0e64.slice. Jul 12 00:08:57.628807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fed3acd6f9f2f8d8dc9c0e2b3352394a284df537b57a667e6242a2e8d34dfd3-rootfs.mount: Deactivated successfully. Jul 12 00:08:57.737567 containerd[1440]: time="2025-07-12T00:08:57.737527480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b95bdc6c4-c5hvb,Uid:59cf923d-026d-4707-961e-6d99594b8482,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:57.764726 containerd[1440]: time="2025-07-12T00:08:57.764671777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4964cc-p2r92,Uid:8de351fb-c5ee-4b34-82bf-ce57122f3ecf,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:08:57.775207 containerd[1440]: time="2025-07-12T00:08:57.775175244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-t657p,Uid:4163ae3e-c117-44b8-afd3-05959bb3dc8f,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:57.786590 kubelet[2458]: E0712 00:08:57.786552 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:08:57.788521 containerd[1440]: time="2025-07-12T00:08:57.787803518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9x52,Uid:e479b3c4-2d61-4209-8ff7-602a3f90e035,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:57.793780 containerd[1440]: time="2025-07-12T00:08:57.793739226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4964cc-mvr8g,Uid:de5af1cf-2d96-4cd0-b664-9eb849bca08f,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:08:57.803480 containerd[1440]: time="2025-07-12T00:08:57.803448990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748bcb9cdf-s67t2,Uid:f1339e87-f0d0-41f4-9691-3d9be7937b47,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:57.805176 kubelet[2458]: E0712 00:08:57.805147 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:08:57.805857 containerd[1440]: time="2025-07-12T00:08:57.805726290Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-ltzl2,Uid:6eb7bd0f-4915-42cf-9421-dbbd624c0e64,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:57.886915 systemd[1]: Created slice kubepods-besteffort-pod5145d8e7_900c_4ad8_a934_1061d118e33b.slice - libcontainer container kubepods-besteffort-pod5145d8e7_900c_4ad8_a934_1061d118e33b.slice. Jul 12 00:08:57.934102 containerd[1440]: time="2025-07-12T00:08:57.933588557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:08:57.934363 containerd[1440]: time="2025-07-12T00:08:57.934138001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lc77w,Uid:5145d8e7-900c-4ad8-a934-1061d118e33b,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:58.248795 containerd[1440]: time="2025-07-12T00:08:58.248723884Z" level=error msg="Failed to destroy network for sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.249106 containerd[1440]: time="2025-07-12T00:08:58.249075710Z" level=error msg="encountered an error cleaning up failed sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.249159 containerd[1440]: time="2025-07-12T00:08:58.249129994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lc77w,Uid:5145d8e7-900c-4ad8-a934-1061d118e33b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.251131 kubelet[2458]: E0712 00:08:58.251070 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.255075 kubelet[2458]: E0712 00:08:58.255021 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lc77w" Jul 12 00:08:58.255184 kubelet[2458]: E0712 00:08:58.255080 2458 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lc77w" Jul 12 00:08:58.255184 kubelet[2458]: 
E0712 00:08:58.255150 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lc77w_calico-system(5145d8e7-900c-4ad8-a934-1061d118e33b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lc77w_calico-system(5145d8e7-900c-4ad8-a934-1061d118e33b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lc77w" podUID="5145d8e7-900c-4ad8-a934-1061d118e33b" Jul 12 00:08:58.263119 containerd[1440]: time="2025-07-12T00:08:58.263072889Z" level=error msg="Failed to destroy network for sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.263469 containerd[1440]: time="2025-07-12T00:08:58.263370832Z" level=error msg="Failed to destroy network for sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.263926 containerd[1440]: time="2025-07-12T00:08:58.263614610Z" level=error msg="Failed to destroy network for sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.264101 containerd[1440]: time="2025-07-12T00:08:58.263863429Z" level=error msg="encountered an error cleaning up failed sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.264249 containerd[1440]: time="2025-07-12T00:08:58.264211295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-t657p,Uid:4163ae3e-c117-44b8-afd3-05959bb3dc8f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.264589 kubelet[2458]: E0712 00:08:58.264546 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.264662 kubelet[2458]: E0712 00:08:58.264607 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-t657p" Jul 12 00:08:58.264662 kubelet[2458]: E0712 00:08:58.264636 2458 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-t657p" Jul 12 00:08:58.264721 kubelet[2458]: E0712 00:08:58.264674 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-t657p_calico-system(4163ae3e-c117-44b8-afd3-05959bb3dc8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-t657p_calico-system(4163ae3e-c117-44b8-afd3-05959bb3dc8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-t657p" podUID="4163ae3e-c117-44b8-afd3-05959bb3dc8f" Jul 12 00:08:58.265390 containerd[1440]: time="2025-07-12T00:08:58.265210171Z" level=error msg="encountered an error cleaning up failed sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.265390 containerd[1440]: time="2025-07-12T00:08:58.265256814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ltzl2,Uid:6eb7bd0f-4915-42cf-9421-dbbd624c0e64,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.265591 containerd[1440]: time="2025-07-12T00:08:58.265557157Z" level=error msg="encountered an error cleaning up failed sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.265642 containerd[1440]: time="2025-07-12T00:08:58.265607201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b95bdc6c4-c5hvb,Uid:59cf923d-026d-4707-961e-6d99594b8482,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 12 00:08:58.267561 kubelet[2458]: E0712 00:08:58.266684 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.267561 kubelet[2458]: E0712 00:08:58.266718 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.267561 kubelet[2458]: E0712 00:08:58.266732 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ltzl2" Jul 12 00:08:58.267561 kubelet[2458]: E0712 00:08:58.266749 2458 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ltzl2" Jul 12 00:08:58.267725 kubelet[2458]: E0712 00:08:58.266755 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b95bdc6c4-c5hvb" Jul 12 00:08:58.267725 kubelet[2458]: E0712 00:08:58.266770 2458 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b95bdc6c4-c5hvb" Jul 12 00:08:58.267725 kubelet[2458]: E0712 00:08:58.266781 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ltzl2_kube-system(6eb7bd0f-4915-42cf-9421-dbbd624c0e64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ltzl2_kube-system(6eb7bd0f-4915-42cf-9421-dbbd624c0e64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ltzl2" podUID="6eb7bd0f-4915-42cf-9421-dbbd624c0e64" Jul 12 00:08:58.267990 kubelet[2458]: E0712 00:08:58.266800 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b95bdc6c4-c5hvb_calico-system(59cf923d-026d-4707-961e-6d99594b8482)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b95bdc6c4-c5hvb_calico-system(59cf923d-026d-4707-961e-6d99594b8482)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b95bdc6c4-c5hvb" podUID="59cf923d-026d-4707-961e-6d99594b8482" Jul 12 00:08:58.269508 containerd[1440]: time="2025-07-12T00:08:58.269463452Z" level=error msg="Failed to destroy network for sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.269905 containerd[1440]: time="2025-07-12T00:08:58.269873844Z" level=error msg="encountered an error cleaning up failed sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.270034 containerd[1440]: time="2025-07-12T00:08:58.270010454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748bcb9cdf-s67t2,Uid:f1339e87-f0d0-41f4-9691-3d9be7937b47,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.270354 kubelet[2458]: E0712 00:08:58.270316 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.270437 kubelet[2458]: E0712 00:08:58.270357 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748bcb9cdf-s67t2" Jul 12 00:08:58.270437 kubelet[2458]: E0712 00:08:58.270373 2458 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748bcb9cdf-s67t2" Jul 12 00:08:58.270437 kubelet[2458]: E0712 00:08:58.270401 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-748bcb9cdf-s67t2_calico-system(f1339e87-f0d0-41f4-9691-3d9be7937b47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-748bcb9cdf-s67t2_calico-system(f1339e87-f0d0-41f4-9691-3d9be7937b47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-748bcb9cdf-s67t2" podUID="f1339e87-f0d0-41f4-9691-3d9be7937b47" Jul 12 00:08:58.272250 containerd[1440]: time="2025-07-12T00:08:58.272199099Z" level=error msg="Failed to destroy network for sandbox \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.272728 containerd[1440]: time="2025-07-12T00:08:58.272684256Z" level=error msg="encountered an error cleaning up failed sandbox \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.272866 containerd[1440]: time="2025-07-12T00:08:58.272743661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4964cc-mvr8g,Uid:de5af1cf-2d96-4cd0-b664-9eb849bca08f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.273388 containerd[1440]: time="2025-07-12T00:08:58.273259860Z" level=error msg="Failed to destroy network for sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.273489 kubelet[2458]: E0712 00:08:58.273348 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.273489 kubelet[2458]: E0712 00:08:58.273392 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd4964cc-mvr8g" Jul 12 00:08:58.273489 kubelet[2458]: E0712 00:08:58.273409 2458 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd4964cc-mvr8g" Jul 12 00:08:58.273652 kubelet[2458]: E0712 00:08:58.273439 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd4964cc-mvr8g_calico-apiserver(de5af1cf-2d96-4cd0-b664-9eb849bca08f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd4964cc-mvr8g_calico-apiserver(de5af1cf-2d96-4cd0-b664-9eb849bca08f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd4964cc-mvr8g" podUID="de5af1cf-2d96-4cd0-b664-9eb849bca08f" Jul 12 00:08:58.273996 containerd[1440]: time="2025-07-12T00:08:58.273906909Z" level=error msg="encountered an error cleaning up failed sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.273996 containerd[1440]: time="2025-07-12T00:08:58.273959313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9x52,Uid:e479b3c4-2d61-4209-8ff7-602a3f90e035,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.274074 containerd[1440]: time="2025-07-12T00:08:58.274030078Z" level=error msg="Failed to destroy network for sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.274236 kubelet[2458]: E0712 00:08:58.274199 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.274236 kubelet[2458]: E0712 00:08:58.274231 2458 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m9x52" Jul 12 00:08:58.274862 kubelet[2458]: E0712 00:08:58.274246 2458 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m9x52" Jul 12 00:08:58.274862 kubelet[2458]: E0712 00:08:58.274271 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-m9x52_kube-system(e479b3c4-2d61-4209-8ff7-602a3f90e035)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-m9x52_kube-system(e479b3c4-2d61-4209-8ff7-602a3f90e035)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m9x52" podUID="e479b3c4-2d61-4209-8ff7-602a3f90e035" Jul 12 00:08:58.274862 kubelet[2458]: E0712 00:08:58.274851 2458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.275014 containerd[1440]: time="2025-07-12T00:08:58.274528396Z" level=error msg="encountered an error cleaning up failed sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.275014 containerd[1440]: time="2025-07-12T00:08:58.274571599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4964cc-p2r92,Uid:8de351fb-c5ee-4b34-82bf-ce57122f3ecf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.275090 kubelet[2458]: E0712 00:08:58.274886 2458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd4964cc-p2r92" Jul 12 00:08:58.275090 kubelet[2458]: E0712 00:08:58.274900 2458 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd4964cc-p2r92" Jul 12 00:08:58.275090 kubelet[2458]: E0712 00:08:58.274927 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd4964cc-p2r92_calico-apiserver(8de351fb-c5ee-4b34-82bf-ce57122f3ecf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd4964cc-p2r92_calico-apiserver(8de351fb-c5ee-4b34-82bf-ce57122f3ecf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd4964cc-p2r92" podUID="8de351fb-c5ee-4b34-82bf-ce57122f3ecf" Jul 12 00:08:58.620230 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd-shm.mount: Deactivated successfully. Jul 12 00:08:58.620333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045-shm.mount: Deactivated successfully. 
Jul 12 00:08:58.922943 containerd[1440]: time="2025-07-12T00:08:58.922835754Z" level=info msg="StopPodSandbox for \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\"" Jul 12 00:08:58.923289 containerd[1440]: time="2025-07-12T00:08:58.922998007Z" level=info msg="Ensure that sandbox 49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497 in task-service has been cleanup successfully" Jul 12 00:08:58.930371 kubelet[2458]: I0712 00:08:58.929656 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:08:58.930371 kubelet[2458]: I0712 00:08:58.929724 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:08:58.930371 kubelet[2458]: I0712 00:08:58.929738 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:08:58.930371 kubelet[2458]: I0712 00:08:58.929750 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:08:58.930371 kubelet[2458]: I0712 00:08:58.929760 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:08:58.930371 kubelet[2458]: I0712 00:08:58.929769 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:08:58.930371 kubelet[2458]: I0712 00:08:58.930353 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:08:58.933608 containerd[1440]: time="2025-07-12T00:08:58.932094175Z" level=info msg="StopPodSandbox for \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\"" Jul 12 00:08:58.933608 containerd[1440]: time="2025-07-12T00:08:58.932258387Z" level=info msg="Ensure that sandbox 3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd in task-service has been cleanup successfully" Jul 12 00:08:58.933608 containerd[1440]: time="2025-07-12T00:08:58.932515527Z" level=info msg="StopPodSandbox for \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\"" Jul 12 00:08:58.933608 containerd[1440]: time="2025-07-12T00:08:58.932659138Z" level=info msg="Ensure that sandbox e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a in task-service has been cleanup successfully" Jul 12 00:08:58.933608 containerd[1440]: time="2025-07-12T00:08:58.933206139Z" level=info msg="StopPodSandbox for \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\"" Jul 12 00:08:58.933608 containerd[1440]: time="2025-07-12T00:08:58.933365591Z" level=info msg="Ensure that sandbox 13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045 in task-service has been cleanup successfully" Jul 12 00:08:58.934009 containerd[1440]: time="2025-07-12T00:08:58.933984598Z" level=info msg="StopPodSandbox for \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\"" Jul 12 00:08:58.934187 containerd[1440]: time="2025-07-12T00:08:58.934167852Z" level=info msg="Ensure that sandbox 3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7 in task-service has been cleanup 
successfully" Jul 12 00:08:58.934395 containerd[1440]: time="2025-07-12T00:08:58.934348305Z" level=info msg="StopPodSandbox for \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\"" Jul 12 00:08:58.934544 containerd[1440]: time="2025-07-12T00:08:58.934518878Z" level=info msg="Ensure that sandbox 5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c in task-service has been cleanup successfully" Jul 12 00:08:58.936395 containerd[1440]: time="2025-07-12T00:08:58.936192645Z" level=info msg="StopPodSandbox for \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\"" Jul 12 00:08:58.939686 containerd[1440]: time="2025-07-12T00:08:58.937341092Z" level=info msg="Ensure that sandbox f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4 in task-service has been cleanup successfully" Jul 12 00:08:58.939686 containerd[1440]: time="2025-07-12T00:08:58.938405492Z" level=info msg="StopPodSandbox for \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\"" Jul 12 00:08:58.939686 containerd[1440]: time="2025-07-12T00:08:58.938556184Z" level=info msg="Ensure that sandbox fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a in task-service has been cleanup successfully" Jul 12 00:08:58.940018 kubelet[2458]: I0712 00:08:58.937904 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:08:58.996098 containerd[1440]: time="2025-07-12T00:08:58.996045452Z" level=error msg="StopPodSandbox for \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\" failed" error="failed to destroy network for sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.996497 kubelet[2458]: E0712 00:08:58.996451 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:08:58.996577 kubelet[2458]: E0712 00:08:58.996514 2458 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c"} Jul 12 00:08:58.996605 kubelet[2458]: E0712 00:08:58.996574 2458 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4163ae3e-c117-44b8-afd3-05959bb3dc8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:58.996605 kubelet[2458]: E0712 00:08:58.996594 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4163ae3e-c117-44b8-afd3-05959bb3dc8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-t657p" podUID="4163ae3e-c117-44b8-afd3-05959bb3dc8f" Jul 12 00:08:58.998952 containerd[1440]: time="2025-07-12T00:08:58.998909189Z" level=error msg="StopPodSandbox for \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\" failed" error="failed to destroy network for sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.999329 kubelet[2458]: E0712 00:08:58.999255 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:08:58.999398 kubelet[2458]: E0712 00:08:58.999338 2458 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7"} Jul 12 00:08:58.999398 kubelet[2458]: E0712 00:08:58.999377 2458 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6eb7bd0f-4915-42cf-9421-dbbd624c0e64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:58.999489 kubelet[2458]: E0712 00:08:58.999397 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6eb7bd0f-4915-42cf-9421-dbbd624c0e64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ltzl2" podUID="6eb7bd0f-4915-42cf-9421-dbbd624c0e64" Jul 12 00:08:58.999547 containerd[1440]: time="2025-07-12T00:08:58.999503434Z" level=error msg="StopPodSandbox for \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\" failed" error="failed to destroy network for sandbox \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:58.999651 kubelet[2458]: E0712 00:08:58.999625 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:08:58.999702 kubelet[2458]: E0712 00:08:58.999654 2458 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497"} Jul 12 00:08:58.999702 kubelet[2458]: E0712 00:08:58.999689 2458 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"de5af1cf-2d96-4cd0-b664-9eb849bca08f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:58.999762 kubelet[2458]: E0712 00:08:58.999707 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"de5af1cf-2d96-4cd0-b664-9eb849bca08f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd4964cc-mvr8g" podUID="de5af1cf-2d96-4cd0-b664-9eb849bca08f" Jul 12 00:08:59.008688 containerd[1440]: time="2025-07-12T00:08:59.008268556Z" level=error msg="StopPodSandbox for \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\" failed" error="failed to destroy network for sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:59.009137 kubelet[2458]: E0712 00:08:59.009088 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:08:59.009207 kubelet[2458]: E0712 00:08:59.009143 2458 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045"} Jul 12 00:08:59.009207 kubelet[2458]: E0712 00:08:59.009180 2458 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59cf923d-026d-4707-961e-6d99594b8482\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jul 12 00:08:59.009207 kubelet[2458]: E0712 00:08:59.009200 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59cf923d-026d-4707-961e-6d99594b8482\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b95bdc6c4-c5hvb" podUID="59cf923d-026d-4707-961e-6d99594b8482" Jul 12 00:08:59.012614 containerd[1440]: time="2025-07-12T00:08:59.012565189Z" level=error msg="StopPodSandbox for \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\" failed" error="failed to destroy network for sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:59.012863 kubelet[2458]: E0712 00:08:59.012799 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:08:59.012919 kubelet[2458]: E0712 00:08:59.012873 2458 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4"} Jul 12 00:08:59.012919 kubelet[2458]: E0712 00:08:59.012911 2458 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e479b3c4-2d61-4209-8ff7-602a3f90e035\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:59.013002 kubelet[2458]: E0712 00:08:59.012933 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e479b3c4-2d61-4209-8ff7-602a3f90e035\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m9x52" podUID="e479b3c4-2d61-4209-8ff7-602a3f90e035" Jul 12 00:08:59.017993 containerd[1440]: time="2025-07-12T00:08:59.017947580Z" level=error msg="StopPodSandbox for \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\" failed" error="failed to destroy network for sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 12 00:08:59.018519 kubelet[2458]: E0712 00:08:59.018311 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:08:59.018519 kubelet[2458]: E0712 00:08:59.018368 2458 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a"} Jul 12 00:08:59.018519 kubelet[2458]: E0712 00:08:59.018398 2458 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5145d8e7-900c-4ad8-a934-1061d118e33b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:59.018519 kubelet[2458]: E0712 00:08:59.018416 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5145d8e7-900c-4ad8-a934-1061d118e33b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lc77w" podUID="5145d8e7-900c-4ad8-a934-1061d118e33b" Jul 12 00:08:59.026248 kubelet[2458]: E0712 00:08:59.018814 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:08:59.026248 kubelet[2458]: E0712 00:08:59.018840 2458 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd"} Jul 12 00:08:59.026248 kubelet[2458]: E0712 00:08:59.018861 2458 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8de351fb-c5ee-4b34-82bf-ce57122f3ecf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:59.026248 kubelet[2458]: E0712 00:08:59.018896 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8de351fb-c5ee-4b34-82bf-ce57122f3ecf\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd4964cc-p2r92" podUID="8de351fb-c5ee-4b34-82bf-ce57122f3ecf" Jul 12 00:08:59.026472 containerd[1440]: time="2025-07-12T00:08:59.018689874Z" level=error msg="StopPodSandbox for \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\" failed" error="failed to destroy network for sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:59.044221 containerd[1440]: time="2025-07-12T00:08:59.043142973Z" level=error msg="StopPodSandbox for \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\" failed" error="failed to destroy network for sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:59.044372 kubelet[2458]: E0712 00:08:59.043388 2458 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:08:59.044372 kubelet[2458]: E0712 00:08:59.043466 2458 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a"} Jul 12 00:08:59.044372 kubelet[2458]: E0712 00:08:59.043498 2458 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f1339e87-f0d0-41f4-9691-3d9be7937b47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:59.044372 kubelet[2458]: E0712 00:08:59.043522 2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f1339e87-f0d0-41f4-9691-3d9be7937b47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-748bcb9cdf-s67t2" podUID="f1339e87-f0d0-41f4-9691-3d9be7937b47" Jul 12 00:09:01.407603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531245653.mount: Deactivated successfully. 
Jul 12 00:09:01.604213 containerd[1440]: time="2025-07-12T00:09:01.604128262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:01.605179 containerd[1440]: time="2025-07-12T00:09:01.604990760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:09:01.606364 containerd[1440]: time="2025-07-12T00:09:01.606323930Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:01.608788 containerd[1440]: time="2025-07-12T00:09:01.608728332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:01.609440 containerd[1440]: time="2025-07-12T00:09:01.609263809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.675623487s" Jul 12 00:09:01.609440 containerd[1440]: time="2025-07-12T00:09:01.609333333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:09:01.637444 containerd[1440]: time="2025-07-12T00:09:01.637398027Z" level=info msg="CreateContainer within sandbox \"5f919077a71bf3b21b03e0df3303e1731944251bea65506a995eafafc205005b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:09:01.651906 containerd[1440]: time="2025-07-12T00:09:01.651810759Z" level=info msg="CreateContainer within sandbox \"5f919077a71bf3b21b03e0df3303e1731944251bea65506a995eafafc205005b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3209e178028427272948f1529e6cb8f4969954aa00f108e50c36dc5dc3046fdb\"" Jul 12 00:09:01.652677 containerd[1440]: time="2025-07-12T00:09:01.652327914Z" level=info msg="StartContainer for \"3209e178028427272948f1529e6cb8f4969954aa00f108e50c36dc5dc3046fdb\"" Jul 12 00:09:01.709470 systemd[1]: Started cri-containerd-3209e178028427272948f1529e6cb8f4969954aa00f108e50c36dc5dc3046fdb.scope - libcontainer container 3209e178028427272948f1529e6cb8f4969954aa00f108e50c36dc5dc3046fdb. Jul 12 00:09:01.738623 containerd[1440]: time="2025-07-12T00:09:01.738566652Z" level=info msg="StartContainer for \"3209e178028427272948f1529e6cb8f4969954aa00f108e50c36dc5dc3046fdb\" returns successfully" Jul 12 00:09:02.020366 kubelet[2458]: I0712 00:09:02.019883 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p2v6g" podStartSLOduration=0.867022328 podStartE2EDuration="12.019855541s" podCreationTimestamp="2025-07-12 00:08:50 +0000 UTC" firstStartedPulling="2025-07-12 00:08:50.457187967 +0000 UTC m=+20.713423687" lastFinishedPulling="2025-07-12 00:09:01.61002114 +0000 UTC m=+31.866256900" observedRunningTime="2025-07-12 00:09:02.019475916 +0000 UTC m=+32.275711676" watchObservedRunningTime="2025-07-12 00:09:02.019855541 +0000 UTC m=+32.276091301" Jul 12 00:09:02.077054 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jul 12 00:09:02.077160 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 12 00:09:02.218503 containerd[1440]: time="2025-07-12T00:09:02.218460743Z" level=info msg="StopPodSandbox for \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\"" Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.302 [INFO][3785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.303 [INFO][3785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" iface="eth0" netns="/var/run/netns/cni-51d7a37c-753c-ef86-c3c0-123887231976" Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.303 [INFO][3785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" iface="eth0" netns="/var/run/netns/cni-51d7a37c-753c-ef86-c3c0-123887231976" Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.305 [INFO][3785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" iface="eth0" netns="/var/run/netns/cni-51d7a37c-753c-ef86-c3c0-123887231976" Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.305 [INFO][3785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.305 [INFO][3785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.441 [INFO][3797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" HandleID="k8s-pod-network.13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Workload="localhost-k8s-whisker--b95bdc6c4--c5hvb-eth0" Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.441 [INFO][3797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.441 [INFO][3797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.451 [WARNING][3797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" HandleID="k8s-pod-network.13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Workload="localhost-k8s-whisker--b95bdc6c4--c5hvb-eth0" Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.451 [INFO][3797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" HandleID="k8s-pod-network.13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Workload="localhost-k8s-whisker--b95bdc6c4--c5hvb-eth0" Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.453 [INFO][3797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:02.458099 containerd[1440]: 2025-07-12 00:09:02.456 [INFO][3785] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:02.461182 containerd[1440]: time="2025-07-12T00:09:02.460816073Z" level=info msg="TearDown network for sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\" successfully" Jul 12 00:09:02.461182 containerd[1440]: time="2025-07-12T00:09:02.460845155Z" level=info msg="StopPodSandbox for \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\" returns successfully" Jul 12 00:09:02.460128 systemd[1]: run-netns-cni\x2d51d7a37c\x2d753c\x2def86\x2dc3c0\x2d123887231976.mount: Deactivated successfully. Jul 12 00:09:02.497046 kubelet[2458]: I0712 00:09:02.497008 2458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h52d6\" (UniqueName: \"kubernetes.io/projected/59cf923d-026d-4707-961e-6d99594b8482-kube-api-access-h52d6\") pod \"59cf923d-026d-4707-961e-6d99594b8482\" (UID: \"59cf923d-026d-4707-961e-6d99594b8482\") " Jul 12 00:09:02.497046 kubelet[2458]: I0712 00:09:02.497056 2458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/59cf923d-026d-4707-961e-6d99594b8482-whisker-backend-key-pair\") pod \"59cf923d-026d-4707-961e-6d99594b8482\" (UID: \"59cf923d-026d-4707-961e-6d99594b8482\") " Jul 12 00:09:02.497205 kubelet[2458]: I0712 00:09:02.497094 2458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59cf923d-026d-4707-961e-6d99594b8482-whisker-ca-bundle\") pod \"59cf923d-026d-4707-961e-6d99594b8482\" (UID: \"59cf923d-026d-4707-961e-6d99594b8482\") " Jul 12 00:09:02.501798 kubelet[2458]: I0712 00:09:02.501759 2458 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59cf923d-026d-4707-961e-6d99594b8482-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "59cf923d-026d-4707-961e-6d99594b8482" (UID: "59cf923d-026d-4707-961e-6d99594b8482"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:09:02.512175 kubelet[2458]: I0712 00:09:02.511299 2458 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59cf923d-026d-4707-961e-6d99594b8482-kube-api-access-h52d6" (OuterVolumeSpecName: "kube-api-access-h52d6") pod "59cf923d-026d-4707-961e-6d99594b8482" (UID: "59cf923d-026d-4707-961e-6d99594b8482"). InnerVolumeSpecName "kube-api-access-h52d6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:09:02.512175 kubelet[2458]: I0712 00:09:02.511800 2458 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59cf923d-026d-4707-961e-6d99594b8482-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "59cf923d-026d-4707-961e-6d99594b8482" (UID: "59cf923d-026d-4707-961e-6d99594b8482"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:09:02.513762 systemd[1]: var-lib-kubelet-pods-59cf923d\x2d026d\x2d4707\x2d961e\x2d6d99594b8482-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh52d6.mount: Deactivated successfully. Jul 12 00:09:02.513881 systemd[1]: var-lib-kubelet-pods-59cf923d\x2d026d\x2d4707\x2d961e\x2d6d99594b8482-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 12 00:09:02.598131 kubelet[2458]: I0712 00:09:02.598071 2458 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/59cf923d-026d-4707-961e-6d99594b8482-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 12 00:09:02.598131 kubelet[2458]: I0712 00:09:02.598116 2458 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59cf923d-026d-4707-961e-6d99594b8482-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 12 00:09:02.598131 kubelet[2458]: I0712 00:09:02.598126 2458 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h52d6\" (UniqueName: \"kubernetes.io/projected/59cf923d-026d-4707-961e-6d99594b8482-kube-api-access-h52d6\") on node \"localhost\" DevicePath \"\"" Jul 12 00:09:03.008567 systemd[1]: Removed slice kubepods-besteffort-pod59cf923d_026d_4707_961e_6d99594b8482.slice - libcontainer container kubepods-besteffort-pod59cf923d_026d_4707_961e_6d99594b8482.slice. Jul 12 00:09:03.075164 systemd[1]: Created slice kubepods-besteffort-pod137e79a7_2a86_48cf_8a69_234cb8fb3d49.slice - libcontainer container kubepods-besteffort-pod137e79a7_2a86_48cf_8a69_234cb8fb3d49.slice. Jul 12 00:09:03.100726 kubelet[2458]: I0712 00:09:03.100638 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/137e79a7-2a86-48cf-8a69-234cb8fb3d49-whisker-backend-key-pair\") pod \"whisker-59fc5b66db-qxrlf\" (UID: \"137e79a7-2a86-48cf-8a69-234cb8fb3d49\") " pod="calico-system/whisker-59fc5b66db-qxrlf" Jul 12 00:09:03.100726 kubelet[2458]: I0712 00:09:03.100681 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6q4\" (UniqueName: \"kubernetes.io/projected/137e79a7-2a86-48cf-8a69-234cb8fb3d49-kube-api-access-9r6q4\") pod \"whisker-59fc5b66db-qxrlf\" (UID: \"137e79a7-2a86-48cf-8a69-234cb8fb3d49\") " pod="calico-system/whisker-59fc5b66db-qxrlf" Jul 12 00:09:03.100726 kubelet[2458]: I0712 00:09:03.100717 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/137e79a7-2a86-48cf-8a69-234cb8fb3d49-whisker-ca-bundle\") pod \"whisker-59fc5b66db-qxrlf\" (UID: \"137e79a7-2a86-48cf-8a69-234cb8fb3d49\") " pod="calico-system/whisker-59fc5b66db-qxrlf" Jul 12 00:09:03.379685 containerd[1440]: time="2025-07-12T00:09:03.379060051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59fc5b66db-qxrlf,Uid:137e79a7-2a86-48cf-8a69-234cb8fb3d49,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:03.510336 systemd-networkd[1367]: calicdb73562f6e: Link UP Jul 12 00:09:03.511392 systemd-networkd[1367]: calicdb73562f6e: Gained carrier Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.411 [INFO][3841] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.433 [INFO][3841] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--59fc5b66db--qxrlf-eth0 whisker-59fc5b66db- calico-system 137e79a7-2a86-48cf-8a69-234cb8fb3d49 926 0 2025-07-12 00:09:03 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59fc5b66db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-59fc5b66db-qxrlf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicdb73562f6e [] [] }} ContainerID="ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" Namespace="calico-system" Pod="whisker-59fc5b66db-qxrlf" WorkloadEndpoint="localhost-k8s-whisker--59fc5b66db--qxrlf-" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.433 [INFO][3841] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" Namespace="calico-system" Pod="whisker-59fc5b66db-qxrlf" WorkloadEndpoint="localhost-k8s-whisker--59fc5b66db--qxrlf-eth0" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.456 [INFO][3855] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" HandleID="k8s-pod-network.ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" Workload="localhost-k8s-whisker--59fc5b66db--qxrlf-eth0" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.456 [INFO][3855] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" HandleID="k8s-pod-network.ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" Workload="localhost-k8s-whisker--59fc5b66db--qxrlf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a4600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-59fc5b66db-qxrlf", "timestamp":"2025-07-12 00:09:03.456802735 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.457 [INFO][3855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.457 [INFO][3855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.457 [INFO][3855] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.468 [INFO][3855] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" host="localhost" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.479 [INFO][3855] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.484 [INFO][3855] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.486 [INFO][3855] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.489 [INFO][3855] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.489 [INFO][3855] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" host="localhost" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.490 [INFO][3855] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.494 [INFO][3855] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" host="localhost" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.500 [INFO][3855] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" host="localhost" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.500 [INFO][3855] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" host="localhost" Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.500 [INFO][3855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:09:03.530149 containerd[1440]: 2025-07-12 00:09:03.500 [INFO][3855] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" HandleID="k8s-pod-network.ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" Workload="localhost-k8s-whisker--59fc5b66db--qxrlf-eth0" Jul 12 00:09:03.530741 containerd[1440]: 2025-07-12 00:09:03.503 [INFO][3841] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" Namespace="calico-system" Pod="whisker-59fc5b66db-qxrlf" WorkloadEndpoint="localhost-k8s-whisker--59fc5b66db--qxrlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59fc5b66db--qxrlf-eth0", GenerateName:"whisker-59fc5b66db-", Namespace:"calico-system", SelfLink:"", UID:"137e79a7-2a86-48cf-8a69-234cb8fb3d49", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59fc5b66db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-59fc5b66db-qxrlf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicdb73562f6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:03.530741 containerd[1440]: 2025-07-12 00:09:03.503 [INFO][3841] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" Namespace="calico-system" Pod="whisker-59fc5b66db-qxrlf" WorkloadEndpoint="localhost-k8s-whisker--59fc5b66db--qxrlf-eth0" Jul 12 00:09:03.530741 containerd[1440]: 2025-07-12 00:09:03.503 [INFO][3841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdb73562f6e ContainerID="ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" Namespace="calico-system" Pod="whisker-59fc5b66db-qxrlf" WorkloadEndpoint="localhost-k8s-whisker--59fc5b66db--qxrlf-eth0" Jul 12 00:09:03.530741 containerd[1440]: 2025-07-12 00:09:03.511 [INFO][3841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" Namespace="calico-system" Pod="whisker-59fc5b66db-qxrlf" WorkloadEndpoint="localhost-k8s-whisker--59fc5b66db--qxrlf-eth0" Jul 12 00:09:03.530741 containerd[1440]: 2025-07-12 00:09:03.512 [INFO][3841] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" Namespace="calico-system" Pod="whisker-59fc5b66db-qxrlf" WorkloadEndpoint="localhost-k8s-whisker--59fc5b66db--qxrlf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59fc5b66db--qxrlf-eth0", GenerateName:"whisker-59fc5b66db-", Namespace:"calico-system", SelfLink:"", UID:"137e79a7-2a86-48cf-8a69-234cb8fb3d49", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59fc5b66db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f", Pod:"whisker-59fc5b66db-qxrlf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicdb73562f6e", MAC:"aa:53:54:c8:74:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:03.530741 containerd[1440]: 2025-07-12 00:09:03.523 [INFO][3841] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f" Namespace="calico-system" Pod="whisker-59fc5b66db-qxrlf" WorkloadEndpoint="localhost-k8s-whisker--59fc5b66db--qxrlf-eth0" Jul 12 00:09:03.571447 containerd[1440]: time="2025-07-12T00:09:03.571326690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:03.571594 containerd[1440]: time="2025-07-12T00:09:03.571400255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:03.575396 containerd[1440]: time="2025-07-12T00:09:03.574153907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:03.575396 containerd[1440]: time="2025-07-12T00:09:03.574294756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:03.606582 systemd[1]: Started cri-containerd-ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f.scope - libcontainer container ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f. 
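k8s.go 418 and k8s.go 446 above dump the same v3.WorkloadEndpoint twice; the only fields that change between "Populated endpoint" and "Added Mac, interface name, and active container ID" are Spec.ContainerID (empty, then ed7adfe1…) and Spec.MAC (empty, then aa:53:54:c8:74:0b), so the second datastore write records what the dataplane actually plumbed. A sketch of the object's load-bearing fields, using the projectcalico.org/v3 API types (import path assumed from the upstream calico/api module); every value is copied from the endpoint dump above:

    package main

    import (
        "fmt"

        apiv3 "github.com/projectcalico/api/pkg/apis/projectcalico/v3"
    )

    func main() {
        wep := apiv3.NewWorkloadEndpoint()
        wep.Name = "localhost-k8s-whisker--59fc5b66db--qxrlf-eth0"
        wep.Namespace = "calico-system"
        wep.Spec = apiv3.WorkloadEndpointSpec{
            Orchestrator:       "k8s",
            Node:               "localhost",
            Pod:                "whisker-59fc5b66db-qxrlf",
            Endpoint:           "eth0",
            ServiceAccountName: "whisker",
            InterfaceName:      "calicdb73562f6e",
            IPNetworks:         []string{"192.168.88.129/32"},
            Profiles:           []string{"kns.calico-system", "ksa.calico-system.whisker"},
            // ContainerID and MAC stay empty in the first write (k8s.go 418)
            // and are filled in by the second write (k8s.go 446).
        }
        fmt.Printf("%s/%s -> %+v\n", wep.Namespace, wep.Name, wep.Spec)
    }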
Jul 12 00:09:03.632388 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:09:03.705721 containerd[1440]: time="2025-07-12T00:09:03.705643848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59fc5b66db-qxrlf,Uid:137e79a7-2a86-48cf-8a69-234cb8fb3d49,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f\"" Jul 12 00:09:03.709806 containerd[1440]: time="2025-07-12T00:09:03.708142805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:09:03.832746 kubelet[2458]: I0712 00:09:03.832692 2458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59cf923d-026d-4707-961e-6d99594b8482" path="/var/lib/kubelet/pods/59cf923d-026d-4707-961e-6d99594b8482/volumes" Jul 12 00:09:03.839510 kubelet[2458]: I0712 00:09:03.839461 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:03.839858 kubelet[2458]: E0712 00:09:03.839831 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:09:04.010444 kubelet[2458]: E0712 00:09:04.010065 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:09:04.669863 containerd[1440]: time="2025-07-12T00:09:04.669803814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:04.670342 containerd[1440]: time="2025-07-12T00:09:04.670302404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 12 00:09:04.671197 containerd[1440]: time="2025-07-12T00:09:04.671145495Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:04.673249 containerd[1440]: time="2025-07-12T00:09:04.673191740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:04.674441 containerd[1440]: time="2025-07-12T00:09:04.674104715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 965.898106ms" Jul 12 00:09:04.674441 containerd[1440]: time="2025-07-12T00:09:04.674147118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:09:04.677746 containerd[1440]: time="2025-07-12T00:09:04.677697693Z" level=info msg="CreateContainer within sandbox \"ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:09:04.689839 containerd[1440]: time="2025-07-12T00:09:04.689784947Z" level=info msg="CreateContainer within sandbox 
\"ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fa55c3c7b09e60c4d1c01826aed145c7793c789b8cbf453f9b077f1d0a9cf177\"" Jul 12 00:09:04.690581 containerd[1440]: time="2025-07-12T00:09:04.690534993Z" level=info msg="StartContainer for \"fa55c3c7b09e60c4d1c01826aed145c7793c789b8cbf453f9b077f1d0a9cf177\"" Jul 12 00:09:04.730501 systemd[1]: Started cri-containerd-fa55c3c7b09e60c4d1c01826aed145c7793c789b8cbf453f9b077f1d0a9cf177.scope - libcontainer container fa55c3c7b09e60c4d1c01826aed145c7793c789b8cbf453f9b077f1d0a9cf177. Jul 12 00:09:04.784963 containerd[1440]: time="2025-07-12T00:09:04.784914524Z" level=info msg="StartContainer for \"fa55c3c7b09e60c4d1c01826aed145c7793c789b8cbf453f9b077f1d0a9cf177\" returns successfully" Jul 12 00:09:04.786539 containerd[1440]: time="2025-07-12T00:09:04.786499700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:09:04.948324 kernel: bpftool[4144]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 12 00:09:05.109095 systemd-networkd[1367]: vxlan.calico: Link UP Jul 12 00:09:05.109104 systemd-networkd[1367]: vxlan.calico: Gained carrier Jul 12 00:09:05.151451 systemd-networkd[1367]: calicdb73562f6e: Gained IPv6LL Jul 12 00:09:06.053728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223096541.mount: Deactivated successfully. Jul 12 00:09:06.083756 containerd[1440]: time="2025-07-12T00:09:06.083689099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:06.085483 containerd[1440]: time="2025-07-12T00:09:06.085441438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:09:06.086705 containerd[1440]: time="2025-07-12T00:09:06.086677029Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:06.091487 containerd[1440]: time="2025-07-12T00:09:06.091442780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:06.092303 containerd[1440]: time="2025-07-12T00:09:06.092256786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.305723205s" Jul 12 00:09:06.092347 containerd[1440]: time="2025-07-12T00:09:06.092307509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:09:06.098080 containerd[1440]: time="2025-07-12T00:09:06.098040195Z" level=info msg="CreateContainer within sandbox \"ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:09:06.138677 containerd[1440]: time="2025-07-12T00:09:06.138623704Z" level=info msg="CreateContainer within sandbox 
\"ed7adfe18d96a05005ae0d05d52bfffde80fc3f6b284c15c0dfa871d83184e9f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c66198849522c933eb59eb65b3019cd96519d09353151a835a0bcaae5c456986\"" Jul 12 00:09:06.139271 containerd[1440]: time="2025-07-12T00:09:06.139174296Z" level=info msg="StartContainer for \"c66198849522c933eb59eb65b3019cd96519d09353151a835a0bcaae5c456986\"" Jul 12 00:09:06.189507 systemd[1]: Started cri-containerd-c66198849522c933eb59eb65b3019cd96519d09353151a835a0bcaae5c456986.scope - libcontainer container c66198849522c933eb59eb65b3019cd96519d09353151a835a0bcaae5c456986. Jul 12 00:09:06.224472 containerd[1440]: time="2025-07-12T00:09:06.224412506Z" level=info msg="StartContainer for \"c66198849522c933eb59eb65b3019cd96519d09353151a835a0bcaae5c456986\" returns successfully" Jul 12 00:09:06.558401 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Jul 12 00:09:10.830755 containerd[1440]: time="2025-07-12T00:09:10.830622871Z" level=info msg="StopPodSandbox for \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\"" Jul 12 00:09:10.831791 containerd[1440]: time="2025-07-12T00:09:10.831735607Z" level=info msg="StopPodSandbox for \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\"" Jul 12 00:09:10.832099 containerd[1440]: time="2025-07-12T00:09:10.831959019Z" level=info msg="StopPodSandbox for \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\"" Jul 12 00:09:10.834044 containerd[1440]: time="2025-07-12T00:09:10.832913147Z" level=info msg="StopPodSandbox for \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\"" Jul 12 00:09:10.917373 kubelet[2458]: I0712 00:09:10.917145 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59fc5b66db-qxrlf" podStartSLOduration=5.531869759 podStartE2EDuration="7.917123567s" podCreationTimestamp="2025-07-12 00:09:03 +0000 UTC" firstStartedPulling="2025-07-12 00:09:03.707780382 +0000 UTC m=+33.964016142" lastFinishedPulling="2025-07-12 00:09:06.09303419 +0000 UTC m=+36.349269950" observedRunningTime="2025-07-12 00:09:07.039764711 +0000 UTC m=+37.296000471" watchObservedRunningTime="2025-07-12 00:09:10.917123567 +0000 UTC m=+41.173359367" Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.917 [INFO][4322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.917 [INFO][4322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" iface="eth0" netns="/var/run/netns/cni-e6b52534-c702-88ea-b491-7a534ccf1797" Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.918 [INFO][4322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" iface="eth0" netns="/var/run/netns/cni-e6b52534-c702-88ea-b491-7a534ccf1797" Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.918 [INFO][4322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" iface="eth0" netns="/var/run/netns/cni-e6b52534-c702-88ea-b491-7a534ccf1797" Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.918 [INFO][4322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.918 [INFO][4322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.964 [INFO][4355] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" HandleID="k8s-pod-network.5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.964 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.964 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.974 [WARNING][4355] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" HandleID="k8s-pod-network.5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.974 [INFO][4355] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" HandleID="k8s-pod-network.5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.978 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:10.983552 containerd[1440]: 2025-07-12 00:09:10.980 [INFO][4322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:10.986240 containerd[1440]: time="2025-07-12T00:09:10.983728657Z" level=info msg="TearDown network for sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\" successfully" Jul 12 00:09:10.986240 containerd[1440]: time="2025-07-12T00:09:10.983765419Z" level=info msg="StopPodSandbox for \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\" returns successfully" Jul 12 00:09:10.987868 containerd[1440]: time="2025-07-12T00:09:10.987810183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-t657p,Uid:4163ae3e-c117-44b8-afd3-05959bb3dc8f,Namespace:calico-system,Attempt:1,}" Jul 12 00:09:10.987914 systemd[1]: run-netns-cni\x2de6b52534\x2dc702\x2d88ea\x2db491\x2d7a534ccf1797.mount: Deactivated successfully. Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.936 [INFO][4328] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.936 [INFO][4328] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" iface="eth0" netns="/var/run/netns/cni-d1c6a12a-85b8-7321-3e8f-553ba8d0a61b" Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.937 [INFO][4328] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" iface="eth0" netns="/var/run/netns/cni-d1c6a12a-85b8-7321-3e8f-553ba8d0a61b" Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.937 [INFO][4328] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" iface="eth0" netns="/var/run/netns/cni-d1c6a12a-85b8-7321-3e8f-553ba8d0a61b" Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.937 [INFO][4328] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.937 [INFO][4328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.966 [INFO][4365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" HandleID="k8s-pod-network.49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.966 [INFO][4365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.978 [INFO][4365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.993 [WARNING][4365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" HandleID="k8s-pod-network.49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.993 [INFO][4365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" HandleID="k8s-pod-network.49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:10.995 [INFO][4365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:11.009665 containerd[1440]: 2025-07-12 00:09:11.003 [INFO][4328] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:11.013632 containerd[1440]: time="2025-07-12T00:09:11.013575990Z" level=info msg="TearDown network for sandbox \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\" successfully" Jul 12 00:09:11.013632 containerd[1440]: time="2025-07-12T00:09:11.013620072Z" level=info msg="StopPodSandbox for \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\" returns successfully" Jul 12 00:09:11.014495 systemd[1]: run-netns-cni\x2dd1c6a12a\x2d85b8\x2d7321\x2d3e8f\x2d553ba8d0a61b.mount: Deactivated successfully. 
Jul 12 00:09:11.014821 containerd[1440]: time="2025-07-12T00:09:11.014777889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4964cc-mvr8g,Uid:de5af1cf-2d96-4cd0-b664-9eb849bca08f,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:10.932 [INFO][4341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:10.932 [INFO][4341] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" iface="eth0" netns="/var/run/netns/cni-ea6e7f2f-6ccb-5055-ef50-885148f798a9" Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:10.932 [INFO][4341] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" iface="eth0" netns="/var/run/netns/cni-ea6e7f2f-6ccb-5055-ef50-885148f798a9" Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:10.932 [INFO][4341] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" iface="eth0" netns="/var/run/netns/cni-ea6e7f2f-6ccb-5055-ef50-885148f798a9" Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:10.933 [INFO][4341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:10.934 [INFO][4341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:10.983 [INFO][4363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" HandleID="k8s-pod-network.3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:10.983 [INFO][4363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:10.996 [INFO][4363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:11.010 [WARNING][4363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" HandleID="k8s-pod-network.3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:11.010 [INFO][4363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" HandleID="k8s-pod-network.3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:11.018 [INFO][4363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:11.024822 containerd[1440]: 2025-07-12 00:09:11.020 [INFO][4341] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:11.026622 systemd[1]: run-netns-cni\x2dea6e7f2f\x2d6ccb\x2d5055\x2def50\x2d885148f798a9.mount: Deactivated successfully. Jul 12 00:09:11.028431 containerd[1440]: time="2025-07-12T00:09:11.028390880Z" level=info msg="TearDown network for sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\" successfully" Jul 12 00:09:11.028566 containerd[1440]: time="2025-07-12T00:09:11.028549888Z" level=info msg="StopPodSandbox for \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\" returns successfully" Jul 12 00:09:11.029566 containerd[1440]: time="2025-07-12T00:09:11.029522816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4964cc-p2r92,Uid:8de351fb-c5ee-4b34-82bf-ce57122f3ecf,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:10.961 [INFO][4327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:10.961 [INFO][4327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" iface="eth0" netns="/var/run/netns/cni-5112fa1b-325a-4e2d-29c9-7a11322ce963" Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:10.961 [INFO][4327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" iface="eth0" netns="/var/run/netns/cni-5112fa1b-325a-4e2d-29c9-7a11322ce963" Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:10.962 [INFO][4327] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" iface="eth0" netns="/var/run/netns/cni-5112fa1b-325a-4e2d-29c9-7a11322ce963" Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:10.962 [INFO][4327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:10.962 [INFO][4327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:11.024 [INFO][4381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" HandleID="k8s-pod-network.f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:11.029 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:11.029 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:11.042 [WARNING][4381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" HandleID="k8s-pod-network.f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:11.042 [INFO][4381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" HandleID="k8s-pod-network.f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:11.047 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:11.053613 containerd[1440]: 2025-07-12 00:09:11.049 [INFO][4327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:11.054256 containerd[1440]: time="2025-07-12T00:09:11.054225992Z" level=info msg="TearDown network for sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\" successfully" Jul 12 00:09:11.054393 containerd[1440]: time="2025-07-12T00:09:11.054369199Z" level=info msg="StopPodSandbox for \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\" returns successfully" Jul 12 00:09:11.054955 kubelet[2458]: E0712 00:09:11.054758 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:09:11.055123 containerd[1440]: time="2025-07-12T00:09:11.055082915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9x52,Uid:e479b3c4-2d61-4209-8ff7-602a3f90e035,Namespace:kube-system,Attempt:1,}" Jul 12 00:09:11.187538 systemd-networkd[1367]: califd375581a38: Link UP Jul 12 00:09:11.187949 systemd-networkd[1367]: califd375581a38: Gained carrier Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.074 [INFO][4393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--t657p-eth0 goldmane-768f4c5c69- calico-system 4163ae3e-c117-44b8-afd3-05959bb3dc8f 972 0 2025-07-12 00:08:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-t657p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califd375581a38 [] [] }} ContainerID="58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" Namespace="calico-system" Pod="goldmane-768f4c5c69-t657p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--t657p-" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.075 [INFO][4393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" Namespace="calico-system" Pod="goldmane-768f4c5c69-t657p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.110 [INFO][4430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" HandleID="k8s-pod-network.58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" 
Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.110 [INFO][4430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" HandleID="k8s-pod-network.58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000595940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-t657p", "timestamp":"2025-07-12 00:09:11.110385438 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.115 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.115 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.115 [INFO][4430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.131 [INFO][4430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" host="localhost" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.139 [INFO][4430] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.148 [INFO][4430] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.151 [INFO][4430] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.155 [INFO][4430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.156 [INFO][4430] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" host="localhost" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.165 [INFO][4430] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554 Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.172 [INFO][4430] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" host="localhost" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.181 [INFO][4430] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" host="localhost" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.181 [INFO][4430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" host="localhost" Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.181 [INFO][4430] ipam/ipam_plugin.go 
374: Released host-wide IPAM lock. Jul 12 00:09:11.211019 containerd[1440]: 2025-07-12 00:09:11.181 [INFO][4430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" HandleID="k8s-pod-network.58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:11.211871 containerd[1440]: 2025-07-12 00:09:11.185 [INFO][4393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" Namespace="calico-system" Pod="goldmane-768f4c5c69-t657p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--t657p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"4163ae3e-c117-44b8-afd3-05959bb3dc8f", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-t657p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califd375581a38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:11.211871 containerd[1440]: 2025-07-12 00:09:11.185 [INFO][4393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" Namespace="calico-system" Pod="goldmane-768f4c5c69-t657p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:11.211871 containerd[1440]: 2025-07-12 00:09:11.185 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd375581a38 ContainerID="58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" Namespace="calico-system" Pod="goldmane-768f4c5c69-t657p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:11.211871 containerd[1440]: 2025-07-12 00:09:11.188 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" Namespace="calico-system" Pod="goldmane-768f4c5c69-t657p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:11.211871 containerd[1440]: 2025-07-12 00:09:11.190 [INFO][4393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" Namespace="calico-system" Pod="goldmane-768f4c5c69-t657p" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--t657p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"4163ae3e-c117-44b8-afd3-05959bb3dc8f", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554", Pod:"goldmane-768f4c5c69-t657p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califd375581a38", MAC:"42:07:8f:af:e8:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:11.211871 containerd[1440]: 2025-07-12 00:09:11.205 [INFO][4393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554" Namespace="calico-system" Pod="goldmane-768f4c5c69-t657p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:11.235765 containerd[1440]: time="2025-07-12T00:09:11.235253268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:11.235765 containerd[1440]: time="2025-07-12T00:09:11.235729211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:11.235765 containerd[1440]: time="2025-07-12T00:09:11.235742932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:11.236099 containerd[1440]: time="2025-07-12T00:09:11.235847457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:11.260608 systemd[1]: Started cri-containerd-58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554.scope - libcontainer container 58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554. 
Jul 12 00:09:11.276350 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:09:11.281451 systemd-networkd[1367]: cali44461e75400: Link UP Jul 12 00:09:11.282510 systemd-networkd[1367]: cali44461e75400: Gained carrier Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.098 [INFO][4404] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0 calico-apiserver-dd4964cc- calico-apiserver de5af1cf-2d96-4cd0-b664-9eb849bca08f 974 0 2025-07-12 00:08:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd4964cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dd4964cc-mvr8g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali44461e75400 [] [] }} ContainerID="5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-mvr8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.098 [INFO][4404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-mvr8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.141 [INFO][4454] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" HandleID="k8s-pod-network.5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.141 [INFO][4454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" HandleID="k8s-pod-network.5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137440), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dd4964cc-mvr8g", "timestamp":"2025-07-12 00:09:11.141443488 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.141 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.181 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.182 [INFO][4454] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.229 [INFO][4454] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" host="localhost" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.239 [INFO][4454] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.248 [INFO][4454] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.251 [INFO][4454] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.254 [INFO][4454] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.254 [INFO][4454] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" host="localhost" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.258 [INFO][4454] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.262 [INFO][4454] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" host="localhost" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.273 [INFO][4454] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" host="localhost" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.273 [INFO][4454] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" host="localhost" Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.273 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
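The goldmane pod's address comes out of the same 192.168.88.128/26 block as the whisker pod's 192.168.88.129: the host holds the block affinity, so consecutive pods on this node take consecutive addresses (192.168.88.130 here, then .131 and .132 for the two calico-apiserver pods below). A /26 block spans 192.168.88.128 through 192.168.88.191, 64 addresses per host block, which a few lines of stdlib Go confirm:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        p := netip.MustParsePrefix("192.168.88.128/26")
        n := 1 << (32 - p.Bits()) // 2^(32-26) = 64 addresses in the block

        first := p.Addr()
        last := first
        for i := 0; i < n-1; i++ {
            last = last.Next()
        }
        fmt.Printf("%s holds %d addrs: %s - %s\n", p, n, first, last)
        // Output: 192.168.88.128/26 holds 64 addrs: 192.168.88.128 - 192.168.88.191
    }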
Jul 12 00:09:11.307919 containerd[1440]: 2025-07-12 00:09:11.273 [INFO][4454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" HandleID="k8s-pod-network.5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:11.308494 containerd[1440]: 2025-07-12 00:09:11.277 [INFO][4404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-mvr8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0", GenerateName:"calico-apiserver-dd4964cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"de5af1cf-2d96-4cd0-b664-9eb849bca08f", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4964cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dd4964cc-mvr8g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44461e75400", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:11.308494 containerd[1440]: 2025-07-12 00:09:11.277 [INFO][4404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-mvr8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:11.308494 containerd[1440]: 2025-07-12 00:09:11.277 [INFO][4404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44461e75400 ContainerID="5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-mvr8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:11.308494 containerd[1440]: 2025-07-12 00:09:11.283 [INFO][4404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-mvr8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:11.308494 containerd[1440]: 2025-07-12 00:09:11.284 [INFO][4404] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-mvr8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0", GenerateName:"calico-apiserver-dd4964cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"de5af1cf-2d96-4cd0-b664-9eb849bca08f", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4964cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e", Pod:"calico-apiserver-dd4964cc-mvr8g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44461e75400", MAC:"5e:cc:a9:32:ff:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:11.308494 containerd[1440]: 2025-07-12 00:09:11.302 [INFO][4404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-mvr8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:11.317571 containerd[1440]: time="2025-07-12T00:09:11.317517480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-t657p,Uid:4163ae3e-c117-44b8-afd3-05959bb3dc8f,Namespace:calico-system,Attempt:1,} returns sandbox id \"58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554\"" Jul 12 00:09:11.319214 containerd[1440]: time="2025-07-12T00:09:11.319180842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:09:11.332824 containerd[1440]: time="2025-07-12T00:09:11.332686667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:11.332824 containerd[1440]: time="2025-07-12T00:09:11.332750990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:11.332824 containerd[1440]: time="2025-07-12T00:09:11.332766911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:11.333095 containerd[1440]: time="2025-07-12T00:09:11.332863395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:11.355798 systemd[1]: Started cri-containerd-5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e.scope - libcontainer container 5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e. Jul 12 00:09:11.371880 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:09:11.397816 containerd[1440]: time="2025-07-12T00:09:11.397766072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4964cc-mvr8g,Uid:de5af1cf-2d96-4cd0-b664-9eb849bca08f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e\"" Jul 12 00:09:11.413898 systemd-networkd[1367]: caliacf6100f001: Link UP Jul 12 00:09:11.414318 systemd-networkd[1367]: caliacf6100f001: Gained carrier Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.128 [INFO][4417] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0 calico-apiserver-dd4964cc- calico-apiserver 8de351fb-c5ee-4b34-82bf-ce57122f3ecf 973 0 2025-07-12 00:08:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd4964cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dd4964cc-p2r92 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliacf6100f001 [] [] }} ContainerID="4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-p2r92" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--p2r92-" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.128 [INFO][4417] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-p2r92" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.173 [INFO][4464] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" HandleID="k8s-pod-network.4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.174 [INFO][4464] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" HandleID="k8s-pod-network.4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000492100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dd4964cc-p2r92", "timestamp":"2025-07-12 00:09:11.173963649 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.174 [INFO][4464] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.274 [INFO][4464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.274 [INFO][4464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.328 [INFO][4464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" host="localhost" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.353 [INFO][4464] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.359 [INFO][4464] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.362 [INFO][4464] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.365 [INFO][4464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.365 [INFO][4464] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" host="localhost" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.370 [INFO][4464] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37 Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.400 [INFO][4464] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" host="localhost" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.406 [INFO][4464] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" host="localhost" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.406 [INFO][4464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" host="localhost" Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.406 [INFO][4464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:09:11.434887 containerd[1440]: 2025-07-12 00:09:11.406 [INFO][4464] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" HandleID="k8s-pod-network.4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:11.435841 containerd[1440]: 2025-07-12 00:09:11.411 [INFO][4417] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-p2r92" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0", GenerateName:"calico-apiserver-dd4964cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8de351fb-c5ee-4b34-82bf-ce57122f3ecf", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4964cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dd4964cc-p2r92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliacf6100f001", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:11.435841 containerd[1440]: 2025-07-12 00:09:11.411 [INFO][4417] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-p2r92" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:11.435841 containerd[1440]: 2025-07-12 00:09:11.411 [INFO][4417] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacf6100f001 ContainerID="4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-p2r92" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:11.435841 containerd[1440]: 2025-07-12 00:09:11.414 [INFO][4417] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-p2r92" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:11.435841 containerd[1440]: 2025-07-12 00:09:11.418 [INFO][4417] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-p2r92" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0", GenerateName:"calico-apiserver-dd4964cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8de351fb-c5ee-4b34-82bf-ce57122f3ecf", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4964cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37", Pod:"calico-apiserver-dd4964cc-p2r92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliacf6100f001", MAC:"26:d7:d9:58:fd:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:11.435841 containerd[1440]: 2025-07-12 00:09:11.432 [INFO][4417] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37" Namespace="calico-apiserver" Pod="calico-apiserver-dd4964cc-p2r92" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:11.460910 containerd[1440]: time="2025-07-12T00:09:11.452980111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:11.461565 containerd[1440]: time="2025-07-12T00:09:11.461499411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:11.461565 containerd[1440]: time="2025-07-12T00:09:11.461538893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:11.461750 containerd[1440]: time="2025-07-12T00:09:11.461650178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:11.484518 systemd[1]: Started cri-containerd-4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37.scope - libcontainer container 4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37. 
Jul 12 00:09:11.496942 systemd-networkd[1367]: calia165a6cfb8b: Link UP Jul 12 00:09:11.497468 systemd-networkd[1367]: calia165a6cfb8b: Gained carrier Jul 12 00:09:11.502594 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.139 [INFO][4443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--m9x52-eth0 coredns-668d6bf9bc- kube-system e479b3c4-2d61-4209-8ff7-602a3f90e035 975 0 2025-07-12 00:08:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-m9x52 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia165a6cfb8b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9x52" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9x52-" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.139 [INFO][4443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9x52" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.184 [INFO][4471] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" HandleID="k8s-pod-network.f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.184 [INFO][4471] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" HandleID="k8s-pod-network.f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-m9x52", "timestamp":"2025-07-12 00:09:11.184681897 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.185 [INFO][4471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.406 [INFO][4471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.406 [INFO][4471] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.429 [INFO][4471] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" host="localhost" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.455 [INFO][4471] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.463 [INFO][4471] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.465 [INFO][4471] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.468 [INFO][4471] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.468 [INFO][4471] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" host="localhost" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.471 [INFO][4471] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45 Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.482 [INFO][4471] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" host="localhost" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.490 [INFO][4471] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" host="localhost" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.490 [INFO][4471] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" host="localhost" Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.490 [INFO][4471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:09:11.523092 containerd[1440]: 2025-07-12 00:09:11.490 [INFO][4471] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" HandleID="k8s-pod-network.f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:11.525456 containerd[1440]: 2025-07-12 00:09:11.493 [INFO][4443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9x52" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m9x52-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e479b3c4-2d61-4209-8ff7-602a3f90e035", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-m9x52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia165a6cfb8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:11.525456 containerd[1440]: 2025-07-12 00:09:11.493 [INFO][4443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9x52" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:11.525456 containerd[1440]: 2025-07-12 00:09:11.493 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia165a6cfb8b ContainerID="f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9x52" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:11.525456 containerd[1440]: 2025-07-12 00:09:11.498 [INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9x52" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:11.525456 
containerd[1440]: 2025-07-12 00:09:11.500 [INFO][4443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9x52" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m9x52-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e479b3c4-2d61-4209-8ff7-602a3f90e035", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45", Pod:"coredns-668d6bf9bc-m9x52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia165a6cfb8b", MAC:"3a:a8:65:22:c7:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:11.525456 containerd[1440]: 2025-07-12 00:09:11.514 [INFO][4443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9x52" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:11.538450 containerd[1440]: time="2025-07-12T00:09:11.538399438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4964cc-p2r92,Uid:8de351fb-c5ee-4b34-82bf-ce57122f3ecf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37\"" Jul 12 00:09:11.555476 containerd[1440]: time="2025-07-12T00:09:11.555338512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:11.555476 containerd[1440]: time="2025-07-12T00:09:11.555431677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:11.555476 containerd[1440]: time="2025-07-12T00:09:11.555443798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:11.555831 containerd[1440]: time="2025-07-12T00:09:11.555607006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:11.573525 systemd[1]: Started cri-containerd-f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45.scope - libcontainer container f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45. Jul 12 00:09:11.584890 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:09:11.604956 containerd[1440]: time="2025-07-12T00:09:11.604188438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9x52,Uid:e479b3c4-2d61-4209-8ff7-602a3f90e035,Namespace:kube-system,Attempt:1,} returns sandbox id \"f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45\"" Jul 12 00:09:11.605086 kubelet[2458]: E0712 00:09:11.604939 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:09:11.611335 containerd[1440]: time="2025-07-12T00:09:11.610726720Z" level=info msg="CreateContainer within sandbox \"f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:09:11.656841 containerd[1440]: time="2025-07-12T00:09:11.656790989Z" level=info msg="CreateContainer within sandbox \"f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"af0cec5b1403e5a511f4a0775f212fdbfbb8f223fa83315a5356c0a17a6ffd41\"" Jul 12 00:09:11.658376 containerd[1440]: time="2025-07-12T00:09:11.658324305Z" level=info msg="StartContainer for \"af0cec5b1403e5a511f4a0775f212fdbfbb8f223fa83315a5356c0a17a6ffd41\"" Jul 12 00:09:11.686551 systemd[1]: Started cri-containerd-af0cec5b1403e5a511f4a0775f212fdbfbb8f223fa83315a5356c0a17a6ffd41.scope - libcontainer container af0cec5b1403e5a511f4a0775f212fdbfbb8f223fa83315a5356c0a17a6ffd41. Jul 12 00:09:11.741450 containerd[1440]: time="2025-07-12T00:09:11.741328753Z" level=info msg="StartContainer for \"af0cec5b1403e5a511f4a0775f212fdbfbb8f223fa83315a5356c0a17a6ffd41\" returns successfully" Jul 12 00:09:11.830241 containerd[1440]: time="2025-07-12T00:09:11.830192529Z" level=info msg="StopPodSandbox for \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\"" Jul 12 00:09:11.830464 containerd[1440]: time="2025-07-12T00:09:11.830435621Z" level=info msg="StopPodSandbox for \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\"" Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.909 [INFO][4738] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.909 [INFO][4738] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" iface="eth0" netns="/var/run/netns/cni-49f49a29-2e6b-207c-e425-35ad64ea4a3d" Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.910 [INFO][4738] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" iface="eth0" netns="/var/run/netns/cni-49f49a29-2e6b-207c-e425-35ad64ea4a3d" Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.911 [INFO][4738] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" iface="eth0" netns="/var/run/netns/cni-49f49a29-2e6b-207c-e425-35ad64ea4a3d" Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.911 [INFO][4738] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.911 [INFO][4738] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.966 [INFO][4756] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" HandleID="k8s-pod-network.fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.966 [INFO][4756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.966 [INFO][4756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.978 [WARNING][4756] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" HandleID="k8s-pod-network.fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.978 [INFO][4756] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" HandleID="k8s-pod-network.fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.983 [INFO][4756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:11.998928 containerd[1440]: 2025-07-12 00:09:11.994 [INFO][4738] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:11.998769 systemd[1]: run-netns-cni\x2d5112fa1b\x2d325a\x2d4e2d\x2d29c9\x2d7a11322ce963.mount: Deactivated successfully. Jul 12 00:09:12.003453 systemd[1]: run-netns-cni\x2d49f49a29\x2d2e6b\x2d207c\x2de425\x2d35ad64ea4a3d.mount: Deactivated successfully. 
Jul 12 00:09:12.005483 containerd[1440]: time="2025-07-12T00:09:12.005428274Z" level=info msg="TearDown network for sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\" successfully" Jul 12 00:09:12.005609 containerd[1440]: time="2025-07-12T00:09:12.005505238Z" level=info msg="StopPodSandbox for \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\" returns successfully" Jul 12 00:09:12.007140 containerd[1440]: time="2025-07-12T00:09:12.007070433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lc77w,Uid:5145d8e7-900c-4ad8-a934-1061d118e33b,Namespace:calico-system,Attempt:1,}" Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:11.919 [INFO][4743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:11.919 [INFO][4743] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" iface="eth0" netns="/var/run/netns/cni-b00eb186-dad0-cc94-de21-d0b80e67c361" Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:11.919 [INFO][4743] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" iface="eth0" netns="/var/run/netns/cni-b00eb186-dad0-cc94-de21-d0b80e67c361" Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:11.921 [INFO][4743] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" iface="eth0" netns="/var/run/netns/cni-b00eb186-dad0-cc94-de21-d0b80e67c361" Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:11.921 [INFO][4743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:11.921 [INFO][4743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:11.971 [INFO][4762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" HandleID="k8s-pod-network.e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:11.971 [INFO][4762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:11.983 [INFO][4762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:12.009 [WARNING][4762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" HandleID="k8s-pod-network.e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:12.010 [INFO][4762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" HandleID="k8s-pod-network.e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:12.018 [INFO][4762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:12.023341 containerd[1440]: 2025-07-12 00:09:12.021 [INFO][4743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:12.024485 containerd[1440]: time="2025-07-12T00:09:12.024431147Z" level=info msg="TearDown network for sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\" successfully" Jul 12 00:09:12.024485 containerd[1440]: time="2025-07-12T00:09:12.024476149Z" level=info msg="StopPodSandbox for \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\" returns successfully" Jul 12 00:09:12.028492 containerd[1440]: time="2025-07-12T00:09:12.028228849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748bcb9cdf-s67t2,Uid:f1339e87-f0d0-41f4-9691-3d9be7937b47,Namespace:calico-system,Attempt:1,}" Jul 12 00:09:12.028670 systemd[1]: run-netns-cni\x2db00eb186\x2ddad0\x2dcc94\x2dde21\x2dd0b80e67c361.mount: Deactivated successfully. 
Jul 12 00:09:12.080894 kubelet[2458]: E0712 00:09:12.080116 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:09:12.103653 kubelet[2458]: I0712 00:09:12.103567 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m9x52" podStartSLOduration=36.103545464 podStartE2EDuration="36.103545464s" podCreationTimestamp="2025-07-12 00:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:09:12.103087882 +0000 UTC m=+42.359323642" watchObservedRunningTime="2025-07-12 00:09:12.103545464 +0000 UTC m=+42.359781224" Jul 12 00:09:12.343197 systemd-networkd[1367]: calia58214b041f: Link UP Jul 12 00:09:12.343930 systemd-networkd[1367]: calia58214b041f: Gained carrier Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.095 [INFO][4773] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lc77w-eth0 csi-node-driver- calico-system 5145d8e7-900c-4ad8-a934-1061d118e33b 1000 0 2025-07-12 00:08:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lc77w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia58214b041f [] [] }} ContainerID="e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" Namespace="calico-system" Pod="csi-node-driver-lc77w" WorkloadEndpoint="localhost-k8s-csi--node--driver--lc77w-" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.095 [INFO][4773] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" Namespace="calico-system" Pod="csi-node-driver-lc77w" WorkloadEndpoint="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.152 [INFO][4800] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" HandleID="k8s-pod-network.e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.154 [INFO][4800] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" HandleID="k8s-pod-network.e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lc77w", "timestamp":"2025-07-12 00:09:12.152161077 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.154 [INFO][4800] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.154 [INFO][4800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.154 [INFO][4800] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.180 [INFO][4800] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" host="localhost" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.279 [INFO][4800] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.289 [INFO][4800] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.296 [INFO][4800] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.306 [INFO][4800] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.306 [INFO][4800] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" host="localhost" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.310 [INFO][4800] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105 Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.329 [INFO][4800] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" host="localhost" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.337 [INFO][4800] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" host="localhost" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.337 [INFO][4800] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" host="localhost" Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.337 [INFO][4800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:09:12.373520 containerd[1440]: 2025-07-12 00:09:12.337 [INFO][4800] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" HandleID="k8s-pod-network.e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:12.374238 containerd[1440]: 2025-07-12 00:09:12.341 [INFO][4773] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" Namespace="calico-system" Pod="csi-node-driver-lc77w" WorkloadEndpoint="localhost-k8s-csi--node--driver--lc77w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lc77w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5145d8e7-900c-4ad8-a934-1061d118e33b", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lc77w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia58214b041f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:12.374238 containerd[1440]: 2025-07-12 00:09:12.341 [INFO][4773] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" Namespace="calico-system" Pod="csi-node-driver-lc77w" WorkloadEndpoint="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:12.374238 containerd[1440]: 2025-07-12 00:09:12.341 [INFO][4773] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia58214b041f ContainerID="e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" Namespace="calico-system" Pod="csi-node-driver-lc77w" WorkloadEndpoint="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:12.374238 containerd[1440]: 2025-07-12 00:09:12.345 [INFO][4773] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" Namespace="calico-system" Pod="csi-node-driver-lc77w" WorkloadEndpoint="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:12.374238 containerd[1440]: 2025-07-12 00:09:12.345 [INFO][4773] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" Namespace="calico-system" Pod="csi-node-driver-lc77w" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--lc77w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lc77w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5145d8e7-900c-4ad8-a934-1061d118e33b", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105", Pod:"csi-node-driver-lc77w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia58214b041f", MAC:"be:ad:a4:b1:46:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:12.374238 containerd[1440]: 2025-07-12 00:09:12.363 [INFO][4773] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105" Namespace="calico-system" Pod="csi-node-driver-lc77w" WorkloadEndpoint="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:12.434409 containerd[1440]: time="2025-07-12T00:09:12.427163276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:12.434409 containerd[1440]: time="2025-07-12T00:09:12.428521221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:12.434409 containerd[1440]: time="2025-07-12T00:09:12.428563823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:12.434409 containerd[1440]: time="2025-07-12T00:09:12.428706950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:12.465764 systemd[1]: Started cri-containerd-e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105.scope - libcontainer container e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105. 
Jul 12 00:09:12.469505 systemd-networkd[1367]: cali7d42ca79d07: Link UP Jul 12 00:09:12.473003 systemd-networkd[1367]: cali7d42ca79d07: Gained carrier Jul 12 00:09:12.511407 systemd-networkd[1367]: caliacf6100f001: Gained IPv6LL Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.157 [INFO][4786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0 calico-kube-controllers-748bcb9cdf- calico-system f1339e87-f0d0-41f4-9691-3d9be7937b47 1001 0 2025-07-12 00:08:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:748bcb9cdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-748bcb9cdf-s67t2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7d42ca79d07 [] [] }} ContainerID="77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" Namespace="calico-system" Pod="calico-kube-controllers-748bcb9cdf-s67t2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.157 [INFO][4786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" Namespace="calico-system" Pod="calico-kube-controllers-748bcb9cdf-s67t2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.206 [INFO][4814] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" HandleID="k8s-pod-network.77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.206 [INFO][4814] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" HandleID="k8s-pod-network.77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-748bcb9cdf-s67t2", "timestamp":"2025-07-12 00:09:12.20619339 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.206 [INFO][4814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.337 [INFO][4814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.337 [INFO][4814] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.358 [INFO][4814] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" host="localhost" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.379 [INFO][4814] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.397 [INFO][4814] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.401 [INFO][4814] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.410 [INFO][4814] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.411 [INFO][4814] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" host="localhost" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.416 [INFO][4814] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7 Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.426 [INFO][4814] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" host="localhost" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.442 [INFO][4814] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" host="localhost" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.442 [INFO][4814] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" host="localhost" Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.442 [INFO][4814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:09:12.518065 containerd[1440]: 2025-07-12 00:09:12.442 [INFO][4814] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" HandleID="k8s-pod-network.77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:12.518758 containerd[1440]: 2025-07-12 00:09:12.449 [INFO][4786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" Namespace="calico-system" Pod="calico-kube-controllers-748bcb9cdf-s67t2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0", GenerateName:"calico-kube-controllers-748bcb9cdf-", Namespace:"calico-system", SelfLink:"", UID:"f1339e87-f0d0-41f4-9691-3d9be7937b47", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"748bcb9cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-748bcb9cdf-s67t2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7d42ca79d07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:12.518758 containerd[1440]: 2025-07-12 00:09:12.449 [INFO][4786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" Namespace="calico-system" Pod="calico-kube-controllers-748bcb9cdf-s67t2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:12.518758 containerd[1440]: 2025-07-12 00:09:12.449 [INFO][4786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d42ca79d07 ContainerID="77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" Namespace="calico-system" Pod="calico-kube-controllers-748bcb9cdf-s67t2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:12.518758 containerd[1440]: 2025-07-12 00:09:12.474 [INFO][4786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" Namespace="calico-system" Pod="calico-kube-controllers-748bcb9cdf-s67t2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:12.518758 containerd[1440]: 2025-07-12 00:09:12.477 [INFO][4786] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" Namespace="calico-system" Pod="calico-kube-controllers-748bcb9cdf-s67t2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0", GenerateName:"calico-kube-controllers-748bcb9cdf-", Namespace:"calico-system", SelfLink:"", UID:"f1339e87-f0d0-41f4-9691-3d9be7937b47", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"748bcb9cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7", Pod:"calico-kube-controllers-748bcb9cdf-s67t2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7d42ca79d07", MAC:"7a:64:e3:03:16:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:12.518758 containerd[1440]: 2025-07-12 00:09:12.510 [INFO][4786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7" Namespace="calico-system" Pod="calico-kube-controllers-748bcb9cdf-s67t2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:12.569637 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:09:12.580016 containerd[1440]: time="2025-07-12T00:09:12.579839124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:12.580403 containerd[1440]: time="2025-07-12T00:09:12.580335907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:12.581125 containerd[1440]: time="2025-07-12T00:09:12.580872253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:12.581125 containerd[1440]: time="2025-07-12T00:09:12.581071623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:12.617735 containerd[1440]: time="2025-07-12T00:09:12.617387246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lc77w,Uid:5145d8e7-900c-4ad8-a934-1061d118e33b,Namespace:calico-system,Attempt:1,} returns sandbox id \"e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105\"" Jul 12 00:09:12.640516 systemd[1]: Started cri-containerd-77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7.scope - libcontainer container 77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7. Jul 12 00:09:12.659460 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:09:12.683420 containerd[1440]: time="2025-07-12T00:09:12.683378693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748bcb9cdf-s67t2,Uid:f1339e87-f0d0-41f4-9691-3d9be7937b47,Namespace:calico-system,Attempt:1,} returns sandbox id \"77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7\"" Jul 12 00:09:12.894476 systemd-networkd[1367]: califd375581a38: Gained IPv6LL Jul 12 00:09:13.086529 kubelet[2458]: E0712 00:09:13.086498 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:09:13.214530 systemd-networkd[1367]: calia165a6cfb8b: Gained IPv6LL Jul 12 00:09:13.278484 systemd-networkd[1367]: cali44461e75400: Gained IPv6LL Jul 12 00:09:13.368364 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:56690.service - OpenSSH per-connection server daemon (10.0.0.1:56690). Jul 12 00:09:13.395795 containerd[1440]: time="2025-07-12T00:09:13.395734539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:13.397135 containerd[1440]: time="2025-07-12T00:09:13.397085122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 00:09:13.399653 containerd[1440]: time="2025-07-12T00:09:13.399606040Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:13.403070 containerd[1440]: time="2025-07-12T00:09:13.403021360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:13.404026 containerd[1440]: time="2025-07-12T00:09:13.403933362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.084716079s" Jul 12 00:09:13.404077 containerd[1440]: time="2025-07-12T00:09:13.404029927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:09:13.406132 containerd[1440]: time="2025-07-12T00:09:13.406098024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:09:13.408671 
containerd[1440]: time="2025-07-12T00:09:13.408618342Z" level=info msg="CreateContainer within sandbox \"58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:09:13.428720 containerd[1440]: time="2025-07-12T00:09:13.428665640Z" level=info msg="CreateContainer within sandbox \"58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"95d267681ac04dc3453e530674eef5283c432958a44b745d4df7999661d5764b\"" Jul 12 00:09:13.429337 containerd[1440]: time="2025-07-12T00:09:13.429246388Z" level=info msg="StartContainer for \"95d267681ac04dc3453e530674eef5283c432958a44b745d4df7999661d5764b\"" Jul 12 00:09:13.461528 systemd[1]: Started cri-containerd-95d267681ac04dc3453e530674eef5283c432958a44b745d4df7999661d5764b.scope - libcontainer container 95d267681ac04dc3453e530674eef5283c432958a44b745d4df7999661d5764b. Jul 12 00:09:13.475632 sshd[4936]: Accepted publickey for core from 10.0.0.1 port 56690 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:13.479521 sshd[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:13.485350 systemd-logind[1421]: New session 8 of user core. Jul 12 00:09:13.491520 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 00:09:13.505931 containerd[1440]: time="2025-07-12T00:09:13.505841534Z" level=info msg="StartContainer for \"95d267681ac04dc3453e530674eef5283c432958a44b745d4df7999661d5764b\" returns successfully" Jul 12 00:09:13.799377 sshd[4936]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:13.803777 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:56690.service: Deactivated successfully. Jul 12 00:09:13.806022 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:09:13.807029 systemd-logind[1421]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:09:13.808109 systemd-logind[1421]: Removed session 8. Jul 12 00:09:13.830821 containerd[1440]: time="2025-07-12T00:09:13.830767746Z" level=info msg="StopPodSandbox for \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\"" Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.878 [INFO][5006] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.878 [INFO][5006] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" iface="eth0" netns="/var/run/netns/cni-b7a8b1db-2ae0-5471-134a-59291227cae6" Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.878 [INFO][5006] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" iface="eth0" netns="/var/run/netns/cni-b7a8b1db-2ae0-5471-134a-59291227cae6" Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.879 [INFO][5006] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" iface="eth0" netns="/var/run/netns/cni-b7a8b1db-2ae0-5471-134a-59291227cae6" Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.879 [INFO][5006] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.879 [INFO][5006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.901 [INFO][5014] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" HandleID="k8s-pod-network.3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.901 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.901 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.910 [WARNING][5014] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" HandleID="k8s-pod-network.3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.910 [INFO][5014] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" HandleID="k8s-pod-network.3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.912 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:13.916835 containerd[1440]: 2025-07-12 00:09:13.914 [INFO][5006] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:13.917264 containerd[1440]: time="2025-07-12T00:09:13.916996583Z" level=info msg="TearDown network for sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\" successfully" Jul 12 00:09:13.917264 containerd[1440]: time="2025-07-12T00:09:13.917023665Z" level=info msg="StopPodSandbox for \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\" returns successfully" Jul 12 00:09:13.917538 kubelet[2458]: E0712 00:09:13.917515 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:09:13.919534 containerd[1440]: time="2025-07-12T00:09:13.919411696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ltzl2,Uid:6eb7bd0f-4915-42cf-9421-dbbd624c0e64,Namespace:kube-system,Attempt:1,}" Jul 12 00:09:13.988897 systemd[1]: run-containerd-runc-k8s.io-95d267681ac04dc3453e530674eef5283c432958a44b745d4df7999661d5764b-runc.PiS1Ws.mount: Deactivated successfully. Jul 12 00:09:13.989009 systemd[1]: run-netns-cni\x2db7a8b1db\x2d2ae0\x2d5471\x2d134a\x2d59291227cae6.mount: Deactivated successfully. 
Jul 12 00:09:14.043126 systemd-networkd[1367]: cali9690dee0d5b: Link UP Jul 12 00:09:14.043475 systemd-networkd[1367]: cali9690dee0d5b: Gained carrier Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:13.964 [INFO][5021] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0 coredns-668d6bf9bc- kube-system 6eb7bd0f-4915-42cf-9421-dbbd624c0e64 1058 0 2025-07-12 00:08:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-ltzl2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9690dee0d5b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" Namespace="kube-system" Pod="coredns-668d6bf9bc-ltzl2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ltzl2-" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:13.965 [INFO][5021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" Namespace="kube-system" Pod="coredns-668d6bf9bc-ltzl2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:13.993 [INFO][5035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" HandleID="k8s-pod-network.b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:13.994 [INFO][5035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" HandleID="k8s-pod-network.b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-ltzl2", "timestamp":"2025-07-12 00:09:13.993926745 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:13.994 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:13.994 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:13.994 [INFO][5035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.004 [INFO][5035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" host="localhost" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.010 [INFO][5035] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.015 [INFO][5035] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.018 [INFO][5035] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.021 [INFO][5035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.021 [INFO][5035] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" host="localhost" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.023 [INFO][5035] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.029 [INFO][5035] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" host="localhost" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.036 [INFO][5035] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" host="localhost" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.036 [INFO][5035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" host="localhost" Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.037 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:09:14.058871 containerd[1440]: 2025-07-12 00:09:14.037 [INFO][5035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" HandleID="k8s-pod-network.b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:14.059608 containerd[1440]: 2025-07-12 00:09:14.039 [INFO][5021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" Namespace="kube-system" Pod="coredns-668d6bf9bc-ltzl2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6eb7bd0f-4915-42cf-9421-dbbd624c0e64", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-ltzl2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9690dee0d5b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:14.059608 containerd[1440]: 2025-07-12 00:09:14.040 [INFO][5021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" Namespace="kube-system" Pod="coredns-668d6bf9bc-ltzl2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:14.059608 containerd[1440]: 2025-07-12 00:09:14.040 [INFO][5021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9690dee0d5b ContainerID="b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" Namespace="kube-system" Pod="coredns-668d6bf9bc-ltzl2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:14.059608 containerd[1440]: 2025-07-12 00:09:14.043 [INFO][5021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" Namespace="kube-system" Pod="coredns-668d6bf9bc-ltzl2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:14.059608 
containerd[1440]: 2025-07-12 00:09:14.044 [INFO][5021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" Namespace="kube-system" Pod="coredns-668d6bf9bc-ltzl2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6eb7bd0f-4915-42cf-9421-dbbd624c0e64", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca", Pod:"coredns-668d6bf9bc-ltzl2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9690dee0d5b", MAC:"d2:88:80:ae:8d:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:14.059608 containerd[1440]: 2025-07-12 00:09:14.056 [INFO][5021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca" Namespace="kube-system" Pod="coredns-668d6bf9bc-ltzl2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:14.082786 containerd[1440]: time="2025-07-12T00:09:14.082523284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:14.082786 containerd[1440]: time="2025-07-12T00:09:14.082598287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:14.082786 containerd[1440]: time="2025-07-12T00:09:14.082614648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:14.083298 containerd[1440]: time="2025-07-12T00:09:14.082729853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:14.114563 systemd[1]: Started cri-containerd-b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca.scope - libcontainer container b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca. Jul 12 00:09:14.132355 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:09:14.164486 containerd[1440]: time="2025-07-12T00:09:14.164419227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ltzl2,Uid:6eb7bd0f-4915-42cf-9421-dbbd624c0e64,Namespace:kube-system,Attempt:1,} returns sandbox id \"b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca\"" Jul 12 00:09:14.165787 kubelet[2458]: E0712 00:09:14.165747 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:09:14.172671 containerd[1440]: time="2025-07-12T00:09:14.172506877Z" level=info msg="CreateContainer within sandbox \"b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:09:14.175828 systemd-networkd[1367]: calia58214b041f: Gained IPv6LL Jul 12 00:09:14.201958 containerd[1440]: time="2025-07-12T00:09:14.201901141Z" level=info msg="CreateContainer within sandbox \"b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4be3fda0b148d113653e4391f70c592ef7127052698cb8454262ef1650adc75\"" Jul 12 00:09:14.203101 containerd[1440]: time="2025-07-12T00:09:14.203072914Z" level=info msg="StartContainer for \"e4be3fda0b148d113653e4391f70c592ef7127052698cb8454262ef1650adc75\"" Jul 12 00:09:14.228510 systemd[1]: Started cri-containerd-e4be3fda0b148d113653e4391f70c592ef7127052698cb8454262ef1650adc75.scope - libcontainer container e4be3fda0b148d113653e4391f70c592ef7127052698cb8454262ef1650adc75. 
Jul 12 00:09:14.264337 containerd[1440]: time="2025-07-12T00:09:14.264291473Z" level=info msg="StartContainer for \"e4be3fda0b148d113653e4391f70c592ef7127052698cb8454262ef1650adc75\" returns successfully" Jul 12 00:09:14.305495 systemd-networkd[1367]: cali7d42ca79d07: Gained IPv6LL Jul 12 00:09:14.976171 containerd[1440]: time="2025-07-12T00:09:14.976122015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:14.977799 containerd[1440]: time="2025-07-12T00:09:14.977751529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:09:14.979659 containerd[1440]: time="2025-07-12T00:09:14.979614534Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:14.982834 containerd[1440]: time="2025-07-12T00:09:14.982745838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:14.983326 containerd[1440]: time="2025-07-12T00:09:14.983272022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.577129476s" Jul 12 00:09:14.983326 containerd[1440]: time="2025-07-12T00:09:14.983324144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:09:14.987705 containerd[1440]: time="2025-07-12T00:09:14.987199841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:09:14.988499 containerd[1440]: time="2025-07-12T00:09:14.988462979Z" level=info msg="CreateContainer within sandbox \"5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:09:15.003612 containerd[1440]: time="2025-07-12T00:09:15.003531505Z" level=info msg="CreateContainer within sandbox \"5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6736333f383e601657402b87c4f372b7dcbfd9b8ed1115ac43d29e151f21e6c8\"" Jul 12 00:09:15.004484 containerd[1440]: time="2025-07-12T00:09:15.004400744Z" level=info msg="StartContainer for \"6736333f383e601657402b87c4f372b7dcbfd9b8ed1115ac43d29e151f21e6c8\"" Jul 12 00:09:15.062506 systemd[1]: Started cri-containerd-6736333f383e601657402b87c4f372b7dcbfd9b8ed1115ac43d29e151f21e6c8.scope - libcontainer container 6736333f383e601657402b87c4f372b7dcbfd9b8ed1115ac43d29e151f21e6c8. 
Jul 12 00:09:15.105107 kubelet[2458]: I0712 00:09:15.104078 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:15.105107 kubelet[2458]: E0712 00:09:15.104687 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:09:15.127390 kubelet[2458]: I0712 00:09:15.127319 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-t657p" podStartSLOduration=24.041084045 podStartE2EDuration="26.127297875s" podCreationTimestamp="2025-07-12 00:08:49 +0000 UTC" firstStartedPulling="2025-07-12 00:09:11.318923269 +0000 UTC m=+41.575159029" lastFinishedPulling="2025-07-12 00:09:13.405137099 +0000 UTC m=+43.661372859" observedRunningTime="2025-07-12 00:09:14.114881643 +0000 UTC m=+44.371117403" watchObservedRunningTime="2025-07-12 00:09:15.127297875 +0000 UTC m=+45.383533635" Jul 12 00:09:15.127610 kubelet[2458]: I0712 00:09:15.127584 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ltzl2" podStartSLOduration=39.127578648 podStartE2EDuration="39.127578648s" podCreationTimestamp="2025-07-12 00:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:09:15.125988977 +0000 UTC m=+45.382224737" watchObservedRunningTime="2025-07-12 00:09:15.127578648 +0000 UTC m=+45.383814408" Jul 12 00:09:15.178021 containerd[1440]: time="2025-07-12T00:09:15.177959019Z" level=info msg="StartContainer for \"6736333f383e601657402b87c4f372b7dcbfd9b8ed1115ac43d29e151f21e6c8\" returns successfully" Jul 12 00:09:15.225634 containerd[1440]: time="2025-07-12T00:09:15.225563586Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:15.226250 containerd[1440]: time="2025-07-12T00:09:15.226156133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:09:15.230169 containerd[1440]: time="2025-07-12T00:09:15.230109189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 242.865786ms" Jul 12 00:09:15.230169 containerd[1440]: time="2025-07-12T00:09:15.230160431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:09:15.232324 containerd[1440]: time="2025-07-12T00:09:15.232245045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:09:15.235418 containerd[1440]: time="2025-07-12T00:09:15.235376945Z" level=info msg="CreateContainer within sandbox \"4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:09:15.248310 containerd[1440]: time="2025-07-12T00:09:15.248242439Z" level=info msg="CreateContainer within sandbox \"4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"3659754e76108e5bfdcb59be1f5c4d90ec7d5c87f0f7630d96a75e5365f374b8\"" Jul 12 00:09:15.248903 containerd[1440]: time="2025-07-12T00:09:15.248873148Z" level=info msg="StartContainer for \"3659754e76108e5bfdcb59be1f5c4d90ec7d5c87f0f7630d96a75e5365f374b8\"" Jul 12 00:09:15.278495 systemd[1]: Started cri-containerd-3659754e76108e5bfdcb59be1f5c4d90ec7d5c87f0f7630d96a75e5365f374b8.scope - libcontainer container 3659754e76108e5bfdcb59be1f5c4d90ec7d5c87f0f7630d96a75e5365f374b8. Jul 12 00:09:15.319590 containerd[1440]: time="2025-07-12T00:09:15.319514504Z" level=info msg="StartContainer for \"3659754e76108e5bfdcb59be1f5c4d90ec7d5c87f0f7630d96a75e5365f374b8\" returns successfully" Jul 12 00:09:15.711036 systemd-networkd[1367]: cali9690dee0d5b: Gained IPv6LL Jul 12 00:09:16.121235 kubelet[2458]: E0712 00:09:16.121111 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:09:16.155737 kubelet[2458]: I0712 00:09:16.154848 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd4964cc-mvr8g" podStartSLOduration=27.571553962 podStartE2EDuration="31.154828798s" podCreationTimestamp="2025-07-12 00:08:45 +0000 UTC" firstStartedPulling="2025-07-12 00:09:11.401013032 +0000 UTC m=+41.657248792" lastFinishedPulling="2025-07-12 00:09:14.984287868 +0000 UTC m=+45.240523628" observedRunningTime="2025-07-12 00:09:16.154311055 +0000 UTC m=+46.410546855" watchObservedRunningTime="2025-07-12 00:09:16.154828798 +0000 UTC m=+46.411064558" Jul 12 00:09:16.258023 kubelet[2458]: I0712 00:09:16.257941 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd4964cc-p2r92" podStartSLOduration=27.568196416 podStartE2EDuration="31.257920664s" podCreationTimestamp="2025-07-12 00:08:45 +0000 UTC" firstStartedPulling="2025-07-12 00:09:11.541506671 +0000 UTC m=+41.797742431" lastFinishedPulling="2025-07-12 00:09:15.231230919 +0000 UTC m=+45.487466679" observedRunningTime="2025-07-12 00:09:16.256505962 +0000 UTC m=+46.512741762" watchObservedRunningTime="2025-07-12 00:09:16.257920664 +0000 UTC m=+46.514156424" Jul 12 00:09:16.578930 containerd[1440]: time="2025-07-12T00:09:16.578880134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:16.580238 containerd[1440]: time="2025-07-12T00:09:16.579469879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 12 00:09:16.582461 containerd[1440]: time="2025-07-12T00:09:16.581218756Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:16.585287 containerd[1440]: time="2025-07-12T00:09:16.585213490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.352902723s" Jul 12 00:09:16.585456 containerd[1440]: time="2025-07-12T00:09:16.585255052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference 
\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:09:16.586988 containerd[1440]: time="2025-07-12T00:09:16.586750718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:09:16.588328 containerd[1440]: time="2025-07-12T00:09:16.588242183Z" level=info msg="CreateContainer within sandbox \"e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:09:16.619021 containerd[1440]: time="2025-07-12T00:09:16.618967966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:16.629510 containerd[1440]: time="2025-07-12T00:09:16.629459745Z" level=info msg="CreateContainer within sandbox \"e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"65e9d488550e6242ce6150381a86add2e2e7ac7be1d1d774b9b45f90824b3ce0\"" Jul 12 00:09:16.630565 containerd[1440]: time="2025-07-12T00:09:16.630530111Z" level=info msg="StartContainer for \"65e9d488550e6242ce6150381a86add2e2e7ac7be1d1d774b9b45f90824b3ce0\"" Jul 12 00:09:16.674541 systemd[1]: Started cri-containerd-65e9d488550e6242ce6150381a86add2e2e7ac7be1d1d774b9b45f90824b3ce0.scope - libcontainer container 65e9d488550e6242ce6150381a86add2e2e7ac7be1d1d774b9b45f90824b3ce0. Jul 12 00:09:16.742943 containerd[1440]: time="2025-07-12T00:09:16.742875942Z" level=info msg="StartContainer for \"65e9d488550e6242ce6150381a86add2e2e7ac7be1d1d774b9b45f90824b3ce0\" returns successfully" Jul 12 00:09:17.124094 kubelet[2458]: I0712 00:09:17.123440 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:17.124094 kubelet[2458]: I0712 00:09:17.123515 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:17.124094 kubelet[2458]: E0712 00:09:17.123743 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:09:18.155203 containerd[1440]: time="2025-07-12T00:09:18.155143013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:18.155772 containerd[1440]: time="2025-07-12T00:09:18.155732598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:09:18.156937 containerd[1440]: time="2025-07-12T00:09:18.156895087Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:18.159923 containerd[1440]: time="2025-07-12T00:09:18.159110700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:18.159989 containerd[1440]: time="2025-07-12T00:09:18.159919614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.573125294s" Jul 12 00:09:18.159989 containerd[1440]: time="2025-07-12T00:09:18.159959575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:09:18.161044 containerd[1440]: time="2025-07-12T00:09:18.160842213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:09:18.173329 containerd[1440]: time="2025-07-12T00:09:18.173265694Z" level=info msg="CreateContainer within sandbox \"77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:09:18.271340 containerd[1440]: time="2025-07-12T00:09:18.271256004Z" level=info msg="CreateContainer within sandbox \"77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"775ed8d153e224e281f6b74ac448db8fe298ec732984d2611720ad683252ecfc\"" Jul 12 00:09:18.272137 containerd[1440]: time="2025-07-12T00:09:18.272105760Z" level=info msg="StartContainer for \"775ed8d153e224e281f6b74ac448db8fe298ec732984d2611720ad683252ecfc\"" Jul 12 00:09:18.308509 systemd[1]: Started cri-containerd-775ed8d153e224e281f6b74ac448db8fe298ec732984d2611720ad683252ecfc.scope - libcontainer container 775ed8d153e224e281f6b74ac448db8fe298ec732984d2611720ad683252ecfc. Jul 12 00:09:18.344611 containerd[1440]: time="2025-07-12T00:09:18.344547719Z" level=info msg="StartContainer for \"775ed8d153e224e281f6b74ac448db8fe298ec732984d2611720ad683252ecfc\" returns successfully" Jul 12 00:09:18.818101 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:56702.service - OpenSSH per-connection server daemon (10.0.0.1:56702). Jul 12 00:09:18.819197 kubelet[2458]: I0712 00:09:18.819165 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:18.889371 sshd[5330]: Accepted publickey for core from 10.0.0.1 port 56702 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:18.891388 sshd[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:18.896366 systemd-logind[1421]: New session 9 of user core. Jul 12 00:09:18.901839 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 12 00:09:19.160673 kubelet[2458]: I0712 00:09:19.159892 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-748bcb9cdf-s67t2" podStartSLOduration=23.683935266 podStartE2EDuration="29.159873554s" podCreationTimestamp="2025-07-12 00:08:50 +0000 UTC" firstStartedPulling="2025-07-12 00:09:12.684694436 +0000 UTC m=+42.940930196" lastFinishedPulling="2025-07-12 00:09:18.160632724 +0000 UTC m=+48.416868484" observedRunningTime="2025-07-12 00:09:19.159827513 +0000 UTC m=+49.416063313" watchObservedRunningTime="2025-07-12 00:09:19.159873554 +0000 UTC m=+49.416109314" Jul 12 00:09:19.356781 containerd[1440]: time="2025-07-12T00:09:19.356726455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:19.359295 containerd[1440]: time="2025-07-12T00:09:19.358034909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 00:09:19.359456 containerd[1440]: time="2025-07-12T00:09:19.359422926Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:19.362526 containerd[1440]: time="2025-07-12T00:09:19.362479012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:19.363549 containerd[1440]: time="2025-07-12T00:09:19.363514815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.202638321s" Jul 12 00:09:19.363620 containerd[1440]: time="2025-07-12T00:09:19.363551376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:09:19.369068 containerd[1440]: time="2025-07-12T00:09:19.369024321Z" level=info msg="CreateContainer within sandbox \"e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:09:19.434463 containerd[1440]: time="2025-07-12T00:09:19.434328049Z" level=info msg="CreateContainer within sandbox \"e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b5b002a2e6e7510e7abdc9d2627d331e3a8248847d100632efea13eab92c8b5e\"" Jul 12 00:09:19.435991 containerd[1440]: time="2025-07-12T00:09:19.435086800Z" level=info msg="StartContainer for \"b5b002a2e6e7510e7abdc9d2627d331e3a8248847d100632efea13eab92c8b5e\"" Jul 12 00:09:19.488721 systemd[1]: Started cri-containerd-b5b002a2e6e7510e7abdc9d2627d331e3a8248847d100632efea13eab92c8b5e.scope - libcontainer container b5b002a2e6e7510e7abdc9d2627d331e3a8248847d100632efea13eab92c8b5e. 
Jul 12 00:09:19.538509 sshd[5330]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:19.541296 containerd[1440]: time="2025-07-12T00:09:19.541235248Z" level=info msg="StartContainer for \"b5b002a2e6e7510e7abdc9d2627d331e3a8248847d100632efea13eab92c8b5e\" returns successfully" Jul 12 00:09:19.544520 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:56702.service: Deactivated successfully. Jul 12 00:09:19.548175 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:09:19.549467 systemd-logind[1421]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:09:19.550482 systemd-logind[1421]: Removed session 9. Jul 12 00:09:19.922179 kubelet[2458]: I0712 00:09:19.922088 2458 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:09:19.925169 kubelet[2458]: I0712 00:09:19.925140 2458 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:09:20.159250 kubelet[2458]: I0712 00:09:20.159185 2458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lc77w" podStartSLOduration=23.413476267 podStartE2EDuration="30.159168998s" podCreationTimestamp="2025-07-12 00:08:50 +0000 UTC" firstStartedPulling="2025-07-12 00:09:12.620340267 +0000 UTC m=+42.876575987" lastFinishedPulling="2025-07-12 00:09:19.366032958 +0000 UTC m=+49.622268718" observedRunningTime="2025-07-12 00:09:20.158747501 +0000 UTC m=+50.414983261" watchObservedRunningTime="2025-07-12 00:09:20.159168998 +0000 UTC m=+50.415404758" Jul 12 00:09:24.549483 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:42428.service - OpenSSH per-connection server daemon (10.0.0.1:42428). Jul 12 00:09:24.585631 sshd[5464]: Accepted publickey for core from 10.0.0.1 port 42428 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:24.587118 sshd[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:24.591051 systemd-logind[1421]: New session 10 of user core. Jul 12 00:09:24.606511 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:09:24.803397 sshd[5464]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:24.811526 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:42428.service: Deactivated successfully. Jul 12 00:09:24.813858 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:09:24.815878 systemd-logind[1421]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:09:24.822679 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:42440.service - OpenSSH per-connection server daemon (10.0.0.1:42440). Jul 12 00:09:24.824338 systemd-logind[1421]: Removed session 10. Jul 12 00:09:24.860526 sshd[5482]: Accepted publickey for core from 10.0.0.1 port 42440 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:24.861898 sshd[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:24.865773 systemd-logind[1421]: New session 11 of user core. Jul 12 00:09:24.877737 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:09:25.070713 sshd[5482]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:25.081257 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:42440.service: Deactivated successfully. Jul 12 00:09:25.087968 systemd[1]: session-11.scope: Deactivated successfully. 
Jul 12 00:09:25.091129 systemd-logind[1421]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:09:25.100676 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:42442.service - OpenSSH per-connection server daemon (10.0.0.1:42442). Jul 12 00:09:25.103631 systemd-logind[1421]: Removed session 11. Jul 12 00:09:25.142174 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 42442 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:25.142991 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:25.148935 systemd-logind[1421]: New session 12 of user core. Jul 12 00:09:25.158519 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 00:09:25.306233 sshd[5496]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:25.309695 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:42442.service: Deactivated successfully. Jul 12 00:09:25.312325 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:09:25.313668 systemd-logind[1421]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:09:25.314733 systemd-logind[1421]: Removed session 12. Jul 12 00:09:29.825005 containerd[1440]: time="2025-07-12T00:09:29.824673427Z" level=info msg="StopPodSandbox for \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\"" Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.861 [WARNING][5527] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m9x52-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e479b3c4-2d61-4209-8ff7-602a3f90e035", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45", Pod:"coredns-668d6bf9bc-m9x52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia165a6cfb8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.861 [INFO][5527] cni-plugin/k8s.go 640: 
Cleaning up netns ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.861 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" iface="eth0" netns="" Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.862 [INFO][5527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.862 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.892 [INFO][5537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" HandleID="k8s-pod-network.f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.892 [INFO][5537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.892 [INFO][5537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.901 [WARNING][5537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" HandleID="k8s-pod-network.f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.901 [INFO][5537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" HandleID="k8s-pod-network.f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.903 [INFO][5537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:29.907838 containerd[1440]: 2025-07-12 00:09:29.905 [INFO][5527] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:29.907838 containerd[1440]: time="2025-07-12T00:09:29.907719291Z" level=info msg="TearDown network for sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\" successfully" Jul 12 00:09:29.907838 containerd[1440]: time="2025-07-12T00:09:29.907742972Z" level=info msg="StopPodSandbox for \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\" returns successfully" Jul 12 00:09:29.908539 containerd[1440]: time="2025-07-12T00:09:29.908295591Z" level=info msg="RemovePodSandbox for \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\"" Jul 12 00:09:29.921037 containerd[1440]: time="2025-07-12T00:09:29.920980121Z" level=info msg="Forcibly stopping sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\"" Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.955 [WARNING][5556] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m9x52-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e479b3c4-2d61-4209-8ff7-602a3f90e035", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f712deb89fe07a0f4e44ab60f3069f73918e60597274750242819eab47023a45", Pod:"coredns-668d6bf9bc-m9x52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia165a6cfb8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.955 [INFO][5556] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.955 [INFO][5556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" iface="eth0" netns="" Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.955 [INFO][5556] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.955 [INFO][5556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.975 [INFO][5565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" HandleID="k8s-pod-network.f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.975 [INFO][5565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.975 [INFO][5565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.985 [WARNING][5565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" HandleID="k8s-pod-network.f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.985 [INFO][5565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" HandleID="k8s-pod-network.f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Workload="localhost-k8s-coredns--668d6bf9bc--m9x52-eth0" Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.986 [INFO][5565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:29.991089 containerd[1440]: 2025-07-12 00:09:29.988 [INFO][5556] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4" Jul 12 00:09:29.991089 containerd[1440]: time="2025-07-12T00:09:29.990583189Z" level=info msg="TearDown network for sandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\" successfully" Jul 12 00:09:30.008254 containerd[1440]: time="2025-07-12T00:09:30.008024364Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:30.008254 containerd[1440]: time="2025-07-12T00:09:30.008119568Z" level=info msg="RemovePodSandbox \"f90ef50490f10256f7bf30b9a3fd3f949838b84cd6d836030221449d935db8c4\" returns successfully" Jul 12 00:09:30.008760 containerd[1440]: time="2025-07-12T00:09:30.008701548Z" level=info msg="StopPodSandbox for \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\"" Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.044 [WARNING][5583] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" WorkloadEndpoint="localhost-k8s-whisker--b95bdc6c4--c5hvb-eth0" Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.044 [INFO][5583] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.044 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" iface="eth0" netns="" Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.044 [INFO][5583] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.044 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.068 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" HandleID="k8s-pod-network.13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Workload="localhost-k8s-whisker--b95bdc6c4--c5hvb-eth0" Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.068 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.068 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.077 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" HandleID="k8s-pod-network.13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Workload="localhost-k8s-whisker--b95bdc6c4--c5hvb-eth0" Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.077 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" HandleID="k8s-pod-network.13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Workload="localhost-k8s-whisker--b95bdc6c4--c5hvb-eth0" Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.078 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.084383 containerd[1440]: 2025-07-12 00:09:30.080 [INFO][5583] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:30.084383 containerd[1440]: time="2025-07-12T00:09:30.083838102Z" level=info msg="TearDown network for sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\" successfully" Jul 12 00:09:30.084383 containerd[1440]: time="2025-07-12T00:09:30.083869863Z" level=info msg="StopPodSandbox for \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\" returns successfully" Jul 12 00:09:30.085127 containerd[1440]: time="2025-07-12T00:09:30.084514846Z" level=info msg="RemovePodSandbox for \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\"" Jul 12 00:09:30.085127 containerd[1440]: time="2025-07-12T00:09:30.084541967Z" level=info msg="Forcibly stopping sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\"" Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.118 [WARNING][5611] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" WorkloadEndpoint="localhost-k8s-whisker--b95bdc6c4--c5hvb-eth0" Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.118 [INFO][5611] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.118 [INFO][5611] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" iface="eth0" netns="" Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.118 [INFO][5611] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.118 [INFO][5611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.136 [INFO][5620] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" HandleID="k8s-pod-network.13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Workload="localhost-k8s-whisker--b95bdc6c4--c5hvb-eth0" Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.136 [INFO][5620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.136 [INFO][5620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.145 [WARNING][5620] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" HandleID="k8s-pod-network.13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Workload="localhost-k8s-whisker--b95bdc6c4--c5hvb-eth0" Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.145 [INFO][5620] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" HandleID="k8s-pod-network.13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Workload="localhost-k8s-whisker--b95bdc6c4--c5hvb-eth0" Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.146 [INFO][5620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.150470 containerd[1440]: 2025-07-12 00:09:30.148 [INFO][5611] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045" Jul 12 00:09:30.150860 containerd[1440]: time="2025-07-12T00:09:30.150616123Z" level=info msg="TearDown network for sandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\" successfully" Jul 12 00:09:30.153662 containerd[1440]: time="2025-07-12T00:09:30.153628749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:30.153729 containerd[1440]: time="2025-07-12T00:09:30.153694071Z" level=info msg="RemovePodSandbox \"13fff1462e9d218015f0f4f4cf5d75458c42e4f84bd79baf018552f8ab29c045\" returns successfully" Jul 12 00:09:30.154187 containerd[1440]: time="2025-07-12T00:09:30.154147487Z" level=info msg="StopPodSandbox for \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\"" Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.191 [WARNING][5638] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0", GenerateName:"calico-apiserver-dd4964cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"de5af1cf-2d96-4cd0-b664-9eb849bca08f", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4964cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e", Pod:"calico-apiserver-dd4964cc-mvr8g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44461e75400", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.192 [INFO][5638] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.192 [INFO][5638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" iface="eth0" netns="" Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.192 [INFO][5638] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.192 [INFO][5638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.212 [INFO][5646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" HandleID="k8s-pod-network.49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.212 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.212 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.222 [WARNING][5646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" HandleID="k8s-pod-network.49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.222 [INFO][5646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" HandleID="k8s-pod-network.49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.223 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.227632 containerd[1440]: 2025-07-12 00:09:30.225 [INFO][5638] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:30.227632 containerd[1440]: time="2025-07-12T00:09:30.227501459Z" level=info msg="TearDown network for sandbox \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\" successfully" Jul 12 00:09:30.227632 containerd[1440]: time="2025-07-12T00:09:30.227530580Z" level=info msg="StopPodSandbox for \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\" returns successfully" Jul 12 00:09:30.228517 containerd[1440]: time="2025-07-12T00:09:30.228230605Z" level=info msg="RemovePodSandbox for \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\"" Jul 12 00:09:30.228517 containerd[1440]: time="2025-07-12T00:09:30.228262926Z" level=info msg="Forcibly stopping sandbox \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\"" Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.260 [WARNING][5663] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0", GenerateName:"calico-apiserver-dd4964cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"de5af1cf-2d96-4cd0-b664-9eb849bca08f", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4964cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5883e05f06cdd2b1e36f13c6ca86de867ed32c2442ee9646914f475a05a3169e", Pod:"calico-apiserver-dd4964cc-mvr8g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44461e75400", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.260 [INFO][5663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.260 [INFO][5663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" iface="eth0" netns="" Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.260 [INFO][5663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.260 [INFO][5663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.277 [INFO][5671] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" HandleID="k8s-pod-network.49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.277 [INFO][5671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.278 [INFO][5671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.285 [WARNING][5671] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" HandleID="k8s-pod-network.49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.286 [INFO][5671] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" HandleID="k8s-pod-network.49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Workload="localhost-k8s-calico--apiserver--dd4964cc--mvr8g-eth0" Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.287 [INFO][5671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.290786 containerd[1440]: 2025-07-12 00:09:30.288 [INFO][5663] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497" Jul 12 00:09:30.292306 containerd[1440]: time="2025-07-12T00:09:30.291240374Z" level=info msg="TearDown network for sandbox \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\" successfully" Jul 12 00:09:30.294166 containerd[1440]: time="2025-07-12T00:09:30.294135115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:30.294345 containerd[1440]: time="2025-07-12T00:09:30.294326362Z" level=info msg="RemovePodSandbox \"49f1552eeb5a69b3ef7eb1252bd2eb73e8767fb5f02cd83f57d1fcb2933f1497\" returns successfully" Jul 12 00:09:30.294935 containerd[1440]: time="2025-07-12T00:09:30.294911742Z" level=info msg="StopPodSandbox for \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\"" Jul 12 00:09:30.322351 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:42456.service - OpenSSH per-connection server daemon (10.0.0.1:42456). Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.336 [WARNING][5688] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--t657p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"4163ae3e-c117-44b8-afd3-05959bb3dc8f", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554", Pod:"goldmane-768f4c5c69-t657p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califd375581a38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.336 [INFO][5688] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.336 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" iface="eth0" netns="" Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.336 [INFO][5688] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.336 [INFO][5688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.356 [INFO][5698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" HandleID="k8s-pod-network.5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.356 [INFO][5698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.356 [INFO][5698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.365 [WARNING][5698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" HandleID="k8s-pod-network.5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.365 [INFO][5698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" HandleID="k8s-pod-network.5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.367 [INFO][5698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.372955 containerd[1440]: 2025-07-12 00:09:30.369 [INFO][5688] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:30.374722 containerd[1440]: time="2025-07-12T00:09:30.374587056Z" level=info msg="TearDown network for sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\" successfully" Jul 12 00:09:30.374722 containerd[1440]: time="2025-07-12T00:09:30.374618537Z" level=info msg="StopPodSandbox for \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\" returns successfully" Jul 12 00:09:30.375200 containerd[1440]: time="2025-07-12T00:09:30.375019671Z" level=info msg="RemovePodSandbox for \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\"" Jul 12 00:09:30.375200 containerd[1440]: time="2025-07-12T00:09:30.375051472Z" level=info msg="Forcibly stopping sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\"" Jul 12 00:09:30.381932 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 42456 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:30.383896 sshd[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:30.388386 systemd-logind[1421]: New session 13 of user core. Jul 12 00:09:30.397454 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.411 [WARNING][5717] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--t657p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"4163ae3e-c117-44b8-afd3-05959bb3dc8f", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58bd4ab06f82961683710ce0cfbb65b98bfdcd1bd60ea0f40153e95d03bb8554", Pod:"goldmane-768f4c5c69-t657p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califd375581a38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.411 [INFO][5717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.411 [INFO][5717] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" iface="eth0" netns="" Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.411 [INFO][5717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.411 [INFO][5717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.431 [INFO][5727] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" HandleID="k8s-pod-network.5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.431 [INFO][5727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.431 [INFO][5727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.440 [WARNING][5727] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" HandleID="k8s-pod-network.5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.440 [INFO][5727] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" HandleID="k8s-pod-network.5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Workload="localhost-k8s-goldmane--768f4c5c69--t657p-eth0" Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.441 [INFO][5727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.449221 containerd[1440]: 2025-07-12 00:09:30.445 [INFO][5717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c" Jul 12 00:09:30.449221 containerd[1440]: time="2025-07-12T00:09:30.448072472Z" level=info msg="TearDown network for sandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\" successfully" Jul 12 00:09:30.451161 containerd[1440]: time="2025-07-12T00:09:30.451124059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:30.451271 containerd[1440]: time="2025-07-12T00:09:30.451190941Z" level=info msg="RemovePodSandbox \"5ad832846c93f0590e41d627fe2bd449433dcca25b49c2d6f65f6e2452328a9c\" returns successfully" Jul 12 00:09:30.451904 containerd[1440]: time="2025-07-12T00:09:30.451803603Z" level=info msg="StopPodSandbox for \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\"" Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.492 [WARNING][5752] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lc77w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5145d8e7-900c-4ad8-a934-1061d118e33b", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105", Pod:"csi-node-driver-lc77w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia58214b041f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.492 [INFO][5752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.492 [INFO][5752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" iface="eth0" netns="" Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.492 [INFO][5752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.492 [INFO][5752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.514 [INFO][5762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" HandleID="k8s-pod-network.fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.515 [INFO][5762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.515 [INFO][5762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.523 [WARNING][5762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" HandleID="k8s-pod-network.fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.523 [INFO][5762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" HandleID="k8s-pod-network.fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.524 [INFO][5762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.530949 containerd[1440]: 2025-07-12 00:09:30.528 [INFO][5752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:30.530949 containerd[1440]: time="2025-07-12T00:09:30.530794252Z" level=info msg="TearDown network for sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\" successfully" Jul 12 00:09:30.530949 containerd[1440]: time="2025-07-12T00:09:30.530820453Z" level=info msg="StopPodSandbox for \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\" returns successfully" Jul 12 00:09:30.532207 containerd[1440]: time="2025-07-12T00:09:30.531882690Z" level=info msg="RemovePodSandbox for \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\"" Jul 12 00:09:30.532207 containerd[1440]: time="2025-07-12T00:09:30.531915211Z" level=info msg="Forcibly stopping sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\"" Jul 12 00:09:30.610136 sshd[5696]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.572 [WARNING][5780] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lc77w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5145d8e7-900c-4ad8-a934-1061d118e33b", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e36433255edbe8b4792f6dda4226cebe3a55bc60905bf0ed76bd6ded062f2105", Pod:"csi-node-driver-lc77w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia58214b041f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.573 [INFO][5780] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.573 [INFO][5780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" iface="eth0" netns="" Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.573 [INFO][5780] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.573 [INFO][5780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.594 [INFO][5789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" HandleID="k8s-pod-network.fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.594 [INFO][5789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.594 [INFO][5789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.602 [WARNING][5789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" HandleID="k8s-pod-network.fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.603 [INFO][5789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" HandleID="k8s-pod-network.fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Workload="localhost-k8s-csi--node--driver--lc77w-eth0" Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.605 [INFO][5789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.611535 containerd[1440]: 2025-07-12 00:09:30.607 [INFO][5780] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a" Jul 12 00:09:30.611535 containerd[1440]: time="2025-07-12T00:09:30.610470126Z" level=info msg="TearDown network for sandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\" successfully" Jul 12 00:09:30.614181 containerd[1440]: time="2025-07-12T00:09:30.613254943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:30.614181 containerd[1440]: time="2025-07-12T00:09:30.613379348Z" level=info msg="RemovePodSandbox \"fa4f82fbc2f2b01e45be3537fb0f063831892e403ada631ffc4f02042feb1f0a\" returns successfully" Jul 12 00:09:30.614738 containerd[1440]: time="2025-07-12T00:09:30.614715634Z" level=info msg="StopPodSandbox for \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\"" Jul 12 00:09:30.620270 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:42456.service: Deactivated successfully. Jul 12 00:09:30.622218 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:09:30.624237 systemd-logind[1421]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:09:30.630955 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:42472.service - OpenSSH per-connection server daemon (10.0.0.1:42472). Jul 12 00:09:30.633432 systemd-logind[1421]: Removed session 13. Jul 12 00:09:30.675885 sshd[5815]: Accepted publickey for core from 10.0.0.1 port 42472 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:30.677341 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:30.682051 systemd-logind[1421]: New session 14 of user core. Jul 12 00:09:30.688432 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.667 [WARNING][5808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0", GenerateName:"calico-apiserver-dd4964cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8de351fb-c5ee-4b34-82bf-ce57122f3ecf", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4964cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37", Pod:"calico-apiserver-dd4964cc-p2r92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliacf6100f001", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.667 [INFO][5808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.667 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" iface="eth0" netns="" Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.667 [INFO][5808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.667 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.685 [INFO][5821] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" HandleID="k8s-pod-network.3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.685 [INFO][5821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.685 [INFO][5821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.694 [WARNING][5821] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" HandleID="k8s-pod-network.3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.694 [INFO][5821] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" HandleID="k8s-pod-network.3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.695 [INFO][5821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.702571 containerd[1440]: 2025-07-12 00:09:30.699 [INFO][5808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:30.702571 containerd[1440]: time="2025-07-12T00:09:30.702567354Z" level=info msg="TearDown network for sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\" successfully" Jul 12 00:09:30.702976 containerd[1440]: time="2025-07-12T00:09:30.702590715Z" level=info msg="StopPodSandbox for \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\" returns successfully" Jul 12 00:09:30.703658 containerd[1440]: time="2025-07-12T00:09:30.703631712Z" level=info msg="RemovePodSandbox for \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\"" Jul 12 00:09:30.703729 containerd[1440]: time="2025-07-12T00:09:30.703662433Z" level=info msg="Forcibly stopping sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\"" Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.740 [WARNING][5839] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0", GenerateName:"calico-apiserver-dd4964cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8de351fb-c5ee-4b34-82bf-ce57122f3ecf", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4964cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b9e154b5e56372441ca28d7952cc7f50711b1ebc8c9c7d1b2c15b58dfc5ae37", Pod:"calico-apiserver-dd4964cc-p2r92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliacf6100f001", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.741 [INFO][5839] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.741 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" iface="eth0" netns="" Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.741 [INFO][5839] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.741 [INFO][5839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.760 [INFO][5849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" HandleID="k8s-pod-network.3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.760 [INFO][5849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.760 [INFO][5849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.769 [WARNING][5849] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" HandleID="k8s-pod-network.3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.769 [INFO][5849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" HandleID="k8s-pod-network.3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Workload="localhost-k8s-calico--apiserver--dd4964cc--p2r92-eth0" Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.771 [INFO][5849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.775118 containerd[1440]: 2025-07-12 00:09:30.772 [INFO][5839] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd" Jul 12 00:09:30.775544 containerd[1440]: time="2025-07-12T00:09:30.775166420Z" level=info msg="TearDown network for sandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\" successfully" Jul 12 00:09:30.779483 containerd[1440]: time="2025-07-12T00:09:30.779438169Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:30.779554 containerd[1440]: time="2025-07-12T00:09:30.779510532Z" level=info msg="RemovePodSandbox \"3449f8cb15c3db1990d3e4a433a9880c7db5cae60a4ad6ba822f695ec90feffd\" returns successfully" Jul 12 00:09:30.780106 containerd[1440]: time="2025-07-12T00:09:30.780069792Z" level=info msg="StopPodSandbox for \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\"" Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.819 [WARNING][5871] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6eb7bd0f-4915-42cf-9421-dbbd624c0e64", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca", Pod:"coredns-668d6bf9bc-ltzl2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9690dee0d5b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.819 [INFO][5871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.819 [INFO][5871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" iface="eth0" netns="" Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.819 [INFO][5871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.819 [INFO][5871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.839 [INFO][5881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" HandleID="k8s-pod-network.3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.840 [INFO][5881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.840 [INFO][5881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.849 [WARNING][5881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" HandleID="k8s-pod-network.3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.849 [INFO][5881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" HandleID="k8s-pod-network.3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.851 [INFO][5881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.854643 containerd[1440]: 2025-07-12 00:09:30.852 [INFO][5871] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:30.855463 containerd[1440]: time="2025-07-12T00:09:30.854672007Z" level=info msg="TearDown network for sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\" successfully" Jul 12 00:09:30.855463 containerd[1440]: time="2025-07-12T00:09:30.854695728Z" level=info msg="StopPodSandbox for \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\" returns successfully" Jul 12 00:09:30.855463 containerd[1440]: time="2025-07-12T00:09:30.855185465Z" level=info msg="RemovePodSandbox for \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\"" Jul 12 00:09:30.855463 containerd[1440]: time="2025-07-12T00:09:30.855213546Z" level=info msg="Forcibly stopping sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\"" Jul 12 00:09:30.928560 sshd[5815]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:30.938462 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:42472.service: Deactivated successfully. Jul 12 00:09:30.940259 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.901 [WARNING][5900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6eb7bd0f-4915-42cf-9421-dbbd624c0e64", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b78129d6d283ee733d8bf4d9584be809b84db5b00e3b30dfcd064f445cc105ca", Pod:"coredns-668d6bf9bc-ltzl2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9690dee0d5b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.901 [INFO][5900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.901 [INFO][5900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" iface="eth0" netns="" Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.901 [INFO][5900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.901 [INFO][5900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.921 [INFO][5909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" HandleID="k8s-pod-network.3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.921 [INFO][5909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.921 [INFO][5909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.933 [WARNING][5909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" HandleID="k8s-pod-network.3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.935 [INFO][5909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" HandleID="k8s-pod-network.3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Workload="localhost-k8s-coredns--668d6bf9bc--ltzl2-eth0" Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.936 [INFO][5909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:30.940465 containerd[1440]: 2025-07-12 00:09:30.938 [INFO][5900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7" Jul 12 00:09:30.940833 containerd[1440]: time="2025-07-12T00:09:30.940501176Z" level=info msg="TearDown network for sandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\" successfully" Jul 12 00:09:30.941534 systemd-logind[1421]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:09:30.950972 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:42476.service - OpenSSH per-connection server daemon (10.0.0.1:42476). Jul 12 00:09:30.952306 systemd-logind[1421]: Removed session 14. Jul 12 00:09:30.960037 containerd[1440]: time="2025-07-12T00:09:30.959869455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:30.960037 containerd[1440]: time="2025-07-12T00:09:30.959946418Z" level=info msg="RemovePodSandbox \"3c015b573ebfc47a1199de2936543649b6ae60a91d27253fefbe79780c35bcd7\" returns successfully" Jul 12 00:09:30.961782 containerd[1440]: time="2025-07-12T00:09:30.961756041Z" level=info msg="StopPodSandbox for \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\"" Jul 12 00:09:30.994380 sshd[5920]: Accepted publickey for core from 10.0.0.1 port 42476 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:30.995795 sshd[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:31.003829 systemd-logind[1421]: New session 15 of user core. Jul 12 00:09:31.007063 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:30.995 [WARNING][5931] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0", GenerateName:"calico-kube-controllers-748bcb9cdf-", Namespace:"calico-system", SelfLink:"", UID:"f1339e87-f0d0-41f4-9691-3d9be7937b47", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"748bcb9cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7", Pod:"calico-kube-controllers-748bcb9cdf-s67t2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7d42ca79d07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:30.995 [INFO][5931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:30.995 [INFO][5931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" iface="eth0" netns="" Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:30.995 [INFO][5931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:30.995 [INFO][5931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:31.018 [INFO][5940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" HandleID="k8s-pod-network.e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:31.018 [INFO][5940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:31.018 [INFO][5940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:31.026 [WARNING][5940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" HandleID="k8s-pod-network.e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:31.026 [INFO][5940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" HandleID="k8s-pod-network.e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:31.027 [INFO][5940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:31.031221 containerd[1440]: 2025-07-12 00:09:31.029 [INFO][5931] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:31.031843 containerd[1440]: time="2025-07-12T00:09:31.031255507Z" level=info msg="TearDown network for sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\" successfully" Jul 12 00:09:31.031843 containerd[1440]: time="2025-07-12T00:09:31.031295708Z" level=info msg="StopPodSandbox for \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\" returns successfully" Jul 12 00:09:31.031843 containerd[1440]: time="2025-07-12T00:09:31.031694922Z" level=info msg="RemovePodSandbox for \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\"" Jul 12 00:09:31.031843 containerd[1440]: time="2025-07-12T00:09:31.031721443Z" level=info msg="Forcibly stopping sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\"" Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.069 [WARNING][5959] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0", GenerateName:"calico-kube-controllers-748bcb9cdf-", Namespace:"calico-system", SelfLink:"", UID:"f1339e87-f0d0-41f4-9691-3d9be7937b47", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"748bcb9cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77c080209516244f51027119d1e83960fd12649a1048a38f4396e1c4d0ca6ad7", Pod:"calico-kube-controllers-748bcb9cdf-s67t2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7d42ca79d07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.069 [INFO][5959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.069 [INFO][5959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" iface="eth0" netns="" Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.069 [INFO][5959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.069 [INFO][5959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.091 [INFO][5973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" HandleID="k8s-pod-network.e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.091 [INFO][5973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.091 [INFO][5973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.100 [WARNING][5973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" HandleID="k8s-pod-network.e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.100 [INFO][5973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" HandleID="k8s-pod-network.e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Workload="localhost-k8s-calico--kube--controllers--748bcb9cdf--s67t2-eth0" Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.103 [INFO][5973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:31.107461 containerd[1440]: 2025-07-12 00:09:31.105 [INFO][5959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a" Jul 12 00:09:31.107917 containerd[1440]: time="2025-07-12T00:09:31.107491432Z" level=info msg="TearDown network for sandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\" successfully" Jul 12 00:09:31.111314 containerd[1440]: time="2025-07-12T00:09:31.110251167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:31.111314 containerd[1440]: time="2025-07-12T00:09:31.110335690Z" level=info msg="RemovePodSandbox \"e878a33b58eab7f696421c1fcbb3c519d53dcc0e1ed90fe9f3505fb5d5ceab8a\" returns successfully" Jul 12 00:09:31.787863 sshd[5920]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:31.795499 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:42476.service: Deactivated successfully. Jul 12 00:09:31.800218 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:09:31.802479 systemd-logind[1421]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:09:31.809641 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:42478.service - OpenSSH per-connection server daemon (10.0.0.1:42478). Jul 12 00:09:31.813259 systemd-logind[1421]: Removed session 15. Jul 12 00:09:31.853031 sshd[5994]: Accepted publickey for core from 10.0.0.1 port 42478 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:31.854840 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:31.859554 systemd-logind[1421]: New session 16 of user core. Jul 12 00:09:31.869483 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 00:09:32.278380 sshd[5994]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:32.289663 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:42478.service: Deactivated successfully. Jul 12 00:09:32.292793 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:09:32.295370 systemd-logind[1421]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:09:32.304256 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:42494.service - OpenSSH per-connection server daemon (10.0.0.1:42494). Jul 12 00:09:32.306179 systemd-logind[1421]: Removed session 16. 
Jul 12 00:09:32.342736 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 42494 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:32.344077 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:32.347750 systemd-logind[1421]: New session 17 of user core. Jul 12 00:09:32.356471 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 12 00:09:32.511079 sshd[6006]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:32.513991 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:42494.service: Deactivated successfully. Jul 12 00:09:32.516822 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:09:32.518237 systemd-logind[1421]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:09:32.519025 systemd-logind[1421]: Removed session 17. Jul 12 00:09:37.528303 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:59402.service - OpenSSH per-connection server daemon (10.0.0.1:59402). Jul 12 00:09:37.578729 sshd[6071]: Accepted publickey for core from 10.0.0.1 port 59402 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:37.580328 sshd[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:37.585500 systemd-logind[1421]: New session 18 of user core. Jul 12 00:09:37.594533 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:09:37.739471 sshd[6071]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:37.743749 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:59402.service: Deactivated successfully. Jul 12 00:09:37.747194 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:09:37.748038 systemd-logind[1421]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:09:37.749526 systemd-logind[1421]: Removed session 18. Jul 12 00:09:41.372669 kubelet[2458]: I0712 00:09:41.372619 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:42.751335 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:57962.service - OpenSSH per-connection server daemon (10.0.0.1:57962). Jul 12 00:09:42.801554 sshd[6087]: Accepted publickey for core from 10.0.0.1 port 57962 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:42.803591 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:42.808070 systemd-logind[1421]: New session 19 of user core. Jul 12 00:09:42.822519 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 12 00:09:43.040457 sshd[6087]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:43.043928 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:57962.service: Deactivated successfully. Jul 12 00:09:43.046981 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:09:43.048464 systemd-logind[1421]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:09:43.049824 systemd-logind[1421]: Removed session 19. Jul 12 00:09:44.950794 systemd[1]: run-containerd-runc-k8s.io-95d267681ac04dc3453e530674eef5283c432958a44b745d4df7999661d5764b-runc.0P8ofj.mount: Deactivated successfully. Jul 12 00:09:48.053670 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:57978.service - OpenSSH per-connection server daemon (10.0.0.1:57978). 
Jul 12 00:09:48.095329 sshd[6128]: Accepted publickey for core from 10.0.0.1 port 57978 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:09:48.096175 sshd[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:48.100411 systemd-logind[1421]: New session 20 of user core. Jul 12 00:09:48.109492 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 00:09:48.293909 sshd[6128]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:48.296481 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:57978.service: Deactivated successfully. Jul 12 00:09:48.298244 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:09:48.300367 systemd-logind[1421]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:09:48.301634 systemd-logind[1421]: Removed session 20. Jul 12 00:09:49.692258 kubelet[2458]: I0712 00:09:49.692064 2458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:49.831094 kubelet[2458]: E0712 00:09:49.831050 2458 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"