Jul 12 00:26:05.914082 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 12 00:26:05.914103 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025 Jul 12 00:26:05.914113 kernel: KASLR enabled Jul 12 00:26:05.914119 kernel: efi: EFI v2.7 by EDK II Jul 12 00:26:05.914124 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jul 12 00:26:05.914130 kernel: random: crng init done Jul 12 00:26:05.914137 kernel: ACPI: Early table checksum verification disabled Jul 12 00:26:05.914144 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jul 12 00:26:05.914150 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 12 00:26:05.914157 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:26:05.914164 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:26:05.914170 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:26:05.914176 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:26:05.914182 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:26:05.914190 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:26:05.914199 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:26:05.914206 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:26:05.914212 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:26:05.914218 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 12 00:26:05.914225 kernel: NUMA: Failed to initialise from firmware Jul 12 00:26:05.914231 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 00:26:05.914258 kernel: NUMA: NODE_DATA [mem 0xdc956800-0xdc95bfff] Jul 12 00:26:05.914265 kernel: Zone ranges: Jul 12 00:26:05.914271 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 00:26:05.914277 kernel: DMA32 empty Jul 12 00:26:05.914286 kernel: Normal empty Jul 12 00:26:05.914292 kernel: Movable zone start for each node Jul 12 00:26:05.914299 kernel: Early memory node ranges Jul 12 00:26:05.914305 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jul 12 00:26:05.914311 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jul 12 00:26:05.914318 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jul 12 00:26:05.914324 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 12 00:26:05.914330 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 12 00:26:05.914336 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 12 00:26:05.914342 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 12 00:26:05.914349 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 00:26:05.914355 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 12 00:26:05.914363 kernel: psci: probing for conduit method from ACPI. Jul 12 00:26:05.914370 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 12 00:26:05.914376 kernel: psci: Using standard PSCI v0.2 function IDs Jul 12 00:26:05.914385 kernel: psci: Trusted OS migration not required Jul 12 00:26:05.914392 kernel: psci: SMC Calling Convention v1.1 Jul 12 00:26:05.914399 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 12 00:26:05.914407 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 12 00:26:05.914414 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 12 00:26:05.914421 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 12 00:26:05.914428 kernel: Detected PIPT I-cache on CPU0 Jul 12 00:26:05.914434 kernel: CPU features: detected: GIC system register CPU interface Jul 12 00:26:05.914441 kernel: CPU features: detected: Hardware dirty bit management Jul 12 00:26:05.914448 kernel: CPU features: detected: Spectre-v4 Jul 12 00:26:05.914455 kernel: CPU features: detected: Spectre-BHB Jul 12 00:26:05.914461 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 12 00:26:05.914468 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 12 00:26:05.914476 kernel: CPU features: detected: ARM erratum 1418040 Jul 12 00:26:05.914483 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 12 00:26:05.914490 kernel: alternatives: applying boot alternatives Jul 12 00:26:05.914498 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:26:05.914505 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 12 00:26:05.914512 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 00:26:05.914518 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 00:26:05.914526 kernel: Fallback order for Node 0: 0 Jul 12 00:26:05.914532 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 12 00:26:05.914539 kernel: Policy zone: DMA Jul 12 00:26:05.914545 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 00:26:05.914554 kernel: software IO TLB: area num 4. Jul 12 00:26:05.914561 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jul 12 00:26:05.914568 kernel: Memory: 2386396K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185892K reserved, 0K cma-reserved) Jul 12 00:26:05.914575 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 12 00:26:05.914582 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 00:26:05.914589 kernel: rcu: RCU event tracing is enabled. Jul 12 00:26:05.914596 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 12 00:26:05.914603 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 00:26:05.914611 kernel: Tracing variant of Tasks RCU enabled. Jul 12 00:26:05.914617 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 12 00:26:05.914624 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 12 00:26:05.914631 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 12 00:26:05.914639 kernel: GICv3: 256 SPIs implemented Jul 12 00:26:05.914646 kernel: GICv3: 0 Extended SPIs implemented Jul 12 00:26:05.914652 kernel: Root IRQ handler: gic_handle_irq Jul 12 00:26:05.914659 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 12 00:26:05.914666 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 12 00:26:05.914672 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 12 00:26:05.914679 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jul 12 00:26:05.914686 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jul 12 00:26:05.914693 kernel: GICv3: using LPI property table @0x00000000400f0000 Jul 12 00:26:05.914699 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jul 12 00:26:05.914706 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 12 00:26:05.914714 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:26:05.914721 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 12 00:26:05.914728 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 12 00:26:05.914735 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 12 00:26:05.914742 kernel: arm-pv: using stolen time PV Jul 12 00:26:05.914749 kernel: Console: colour dummy device 80x25 Jul 12 00:26:05.914755 kernel: ACPI: Core revision 20230628 Jul 12 00:26:05.914762 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 12 00:26:05.914769 kernel: pid_max: default: 32768 minimum: 301 Jul 12 00:26:05.914776 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 12 00:26:05.914784 kernel: landlock: Up and running. Jul 12 00:26:05.914791 kernel: SELinux: Initializing. Jul 12 00:26:05.914798 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:26:05.914805 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:26:05.914812 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 12 00:26:05.914819 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 12 00:26:05.914826 kernel: rcu: Hierarchical SRCU implementation. Jul 12 00:26:05.914833 kernel: rcu: Max phase no-delay instances is 400. Jul 12 00:26:05.914840 kernel: Platform MSI: ITS@0x8080000 domain created Jul 12 00:26:05.914848 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 12 00:26:05.914855 kernel: Remapping and enabling EFI services. Jul 12 00:26:05.914862 kernel: smp: Bringing up secondary CPUs ... 
Jul 12 00:26:05.914868 kernel: Detected PIPT I-cache on CPU1 Jul 12 00:26:05.914875 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 12 00:26:05.914882 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jul 12 00:26:05.914889 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:26:05.914896 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 12 00:26:05.914903 kernel: Detected PIPT I-cache on CPU2 Jul 12 00:26:05.914910 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 12 00:26:05.914918 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jul 12 00:26:05.914925 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:26:05.914937 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 12 00:26:05.914946 kernel: Detected PIPT I-cache on CPU3 Jul 12 00:26:05.914954 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 12 00:26:05.914961 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jul 12 00:26:05.914969 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:26:05.914976 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 12 00:26:05.914983 kernel: smp: Brought up 1 node, 4 CPUs Jul 12 00:26:05.914992 kernel: SMP: Total of 4 processors activated. Jul 12 00:26:05.915000 kernel: CPU features: detected: 32-bit EL0 Support Jul 12 00:26:05.915007 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 12 00:26:05.915014 kernel: CPU features: detected: Common not Private translations Jul 12 00:26:05.915022 kernel: CPU features: detected: CRC32 instructions Jul 12 00:26:05.915029 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 12 00:26:05.915036 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 12 00:26:05.915043 kernel: CPU features: detected: LSE atomic instructions Jul 12 00:26:05.915052 kernel: CPU features: detected: Privileged Access Never Jul 12 00:26:05.915059 kernel: CPU features: detected: RAS Extension Support Jul 12 00:26:05.915071 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 12 00:26:05.915080 kernel: CPU: All CPU(s) started at EL1 Jul 12 00:26:05.915087 kernel: alternatives: applying system-wide alternatives Jul 12 00:26:05.915094 kernel: devtmpfs: initialized Jul 12 00:26:05.915102 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 00:26:05.915110 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 12 00:26:05.915117 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 00:26:05.915126 kernel: SMBIOS 3.0.0 present. 
Jul 12 00:26:05.915133 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jul 12 00:26:05.915141 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 00:26:05.915148 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 12 00:26:05.915155 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 12 00:26:05.915163 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 12 00:26:05.915170 kernel: audit: initializing netlink subsys (disabled) Jul 12 00:26:05.915177 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Jul 12 00:26:05.915184 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 00:26:05.915193 kernel: cpuidle: using governor menu Jul 12 00:26:05.915200 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 12 00:26:05.915208 kernel: ASID allocator initialised with 32768 entries Jul 12 00:26:05.915215 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 00:26:05.915222 kernel: Serial: AMBA PL011 UART driver Jul 12 00:26:05.915229 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 12 00:26:05.915318 kernel: Modules: 0 pages in range for non-PLT usage Jul 12 00:26:05.915327 kernel: Modules: 509008 pages in range for PLT usage Jul 12 00:26:05.915334 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 00:26:05.915344 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 12 00:26:05.915352 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 12 00:26:05.915359 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 12 00:26:05.915378 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 00:26:05.915386 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 12 00:26:05.915394 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 12 00:26:05.915401 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 12 00:26:05.915408 kernel: ACPI: Added _OSI(Module Device) Jul 12 00:26:05.915416 kernel: ACPI: Added _OSI(Processor Device) Jul 12 00:26:05.915424 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 00:26:05.915432 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 00:26:05.915439 kernel: ACPI: Interpreter enabled Jul 12 00:26:05.915446 kernel: ACPI: Using GIC for interrupt routing Jul 12 00:26:05.915454 kernel: ACPI: MCFG table detected, 1 entries Jul 12 00:26:05.915461 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 12 00:26:05.915468 kernel: printk: console [ttyAMA0] enabled Jul 12 00:26:05.915476 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 12 00:26:05.915614 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 12 00:26:05.915688 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 12 00:26:05.915754 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 12 00:26:05.915815 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 12 00:26:05.915877 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 12 00:26:05.915887 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 12 00:26:05.915894 kernel: PCI host bridge to bus 0000:00 Jul 12 
00:26:05.915963 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 12 00:26:05.916024 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 12 00:26:05.916097 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 12 00:26:05.916162 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 12 00:26:05.916254 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 12 00:26:05.916336 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 12 00:26:05.916405 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 12 00:26:05.916476 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 12 00:26:05.916543 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 12 00:26:05.916855 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 12 00:26:05.917219 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 12 00:26:05.917365 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 12 00:26:05.917430 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 12 00:26:05.917498 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 12 00:26:05.917561 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 12 00:26:05.917571 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 12 00:26:05.917579 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 12 00:26:05.917586 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 12 00:26:05.917594 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 12 00:26:05.917601 kernel: iommu: Default domain type: Translated Jul 12 00:26:05.917608 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 12 00:26:05.917615 kernel: efivars: Registered efivars operations Jul 12 00:26:05.917623 kernel: vgaarb: loaded Jul 12 00:26:05.917633 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 12 00:26:05.917640 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 00:26:05.917648 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 00:26:05.917655 kernel: pnp: PnP ACPI init Jul 12 00:26:05.917733 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 12 00:26:05.917744 kernel: pnp: PnP ACPI: found 1 devices Jul 12 00:26:05.917752 kernel: NET: Registered PF_INET protocol family Jul 12 00:26:05.917759 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 00:26:05.917769 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 00:26:05.917776 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 00:26:05.917784 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 12 00:26:05.917791 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 12 00:26:05.917798 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 00:26:05.917824 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:26:05.917831 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:26:05.917838 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 00:26:05.917846 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:26:05.917855 kernel: kvm [1]: HYP mode not available Jul 12 00:26:05.917862 kernel: Initialise 
system trusted keyrings Jul 12 00:26:05.917869 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:26:05.917876 kernel: Key type asymmetric registered Jul 12 00:26:05.917883 kernel: Asymmetric key parser 'x509' registered Jul 12 00:26:05.917891 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 12 00:26:05.917898 kernel: io scheduler mq-deadline registered Jul 12 00:26:05.917905 kernel: io scheduler kyber registered Jul 12 00:26:05.917913 kernel: io scheduler bfq registered Jul 12 00:26:05.917922 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 12 00:26:05.917929 kernel: ACPI: button: Power Button [PWRB] Jul 12 00:26:05.917937 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:26:05.918008 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 12 00:26:05.918018 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:26:05.918026 kernel: thunder_xcv, ver 1.0 Jul 12 00:26:05.918033 kernel: thunder_bgx, ver 1.0 Jul 12 00:26:05.918040 kernel: nicpf, ver 1.0 Jul 12 00:26:05.918047 kernel: nicvf, ver 1.0 Jul 12 00:26:05.918137 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:26:05.918203 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:26:05 UTC (1752279965) Jul 12 00:26:05.918213 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:26:05.918221 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 12 00:26:05.918228 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 12 00:26:05.918250 kernel: watchdog: Hard watchdog permanently disabled Jul 12 00:26:05.918257 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:26:05.918265 kernel: Segment Routing with IPv6 Jul 12 00:26:05.918275 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:26:05.918282 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:26:05.918289 kernel: Key type dns_resolver registered Jul 12 00:26:05.918296 kernel: registered taskstats version 1 Jul 12 00:26:05.918304 kernel: Loading compiled-in X.509 certificates Jul 12 00:26:05.918311 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15' Jul 12 00:26:05.918318 kernel: Key type .fscrypt registered Jul 12 00:26:05.918325 kernel: Key type fscrypt-provisioning registered Jul 12 00:26:05.918333 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 12 00:26:05.918342 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:26:05.918349 kernel: ima: No architecture policies found Jul 12 00:26:05.918357 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:26:05.918364 kernel: clk: Disabling unused clocks Jul 12 00:26:05.918371 kernel: Freeing unused kernel memory: 39424K Jul 12 00:26:05.918378 kernel: Run /init as init process Jul 12 00:26:05.918385 kernel: with arguments: Jul 12 00:26:05.918392 kernel: /init Jul 12 00:26:05.918399 kernel: with environment: Jul 12 00:26:05.918408 kernel: HOME=/ Jul 12 00:26:05.918415 kernel: TERM=linux Jul 12 00:26:05.918423 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:26:05.918432 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:26:05.918442 systemd[1]: Detected virtualization kvm. Jul 12 00:26:05.918450 systemd[1]: Detected architecture arm64. Jul 12 00:26:05.918458 systemd[1]: Running in initrd. Jul 12 00:26:05.918465 systemd[1]: No hostname configured, using default hostname. Jul 12 00:26:05.918474 systemd[1]: Hostname set to . Jul 12 00:26:05.918483 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:26:05.918490 systemd[1]: Queued start job for default target initrd.target. Jul 12 00:26:05.918498 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:26:05.918506 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:26:05.918515 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 12 00:26:05.918523 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:26:05.918532 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 00:26:05.918540 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 00:26:05.918550 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 12 00:26:05.918558 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 00:26:05.918595 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:26:05.918602 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:26:05.918610 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:26:05.918619 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:26:05.918627 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:26:05.918635 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:26:05.918642 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:26:05.918650 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:26:05.918658 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 00:26:05.918666 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 12 00:26:05.918704 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 12 00:26:05.918713 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:26:05.918723 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:26:05.918731 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:26:05.918738 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 00:26:05.918746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:26:05.918754 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 12 00:26:05.918762 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:26:05.918770 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:26:05.918778 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:26:05.918786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:26:05.918796 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 00:26:05.918804 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:26:05.918811 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:26:05.918820 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:26:05.918830 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:26:05.918857 systemd-journald[239]: Collecting audit messages is disabled. Jul 12 00:26:05.918905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:26:05.918913 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:26:05.918924 systemd-journald[239]: Journal started Jul 12 00:26:05.918943 systemd-journald[239]: Runtime Journal (/run/log/journal/de1d0befe5ab492d93947b8b3e9e2a68) is 5.9M, max 47.3M, 41.4M free. Jul 12 00:26:05.909543 systemd-modules-load[240]: Inserted module 'overlay' Jul 12 00:26:05.923881 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:26:05.926259 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:26:05.926528 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:26:05.930771 kernel: Bridge firewalling registered Jul 12 00:26:05.928604 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:26:05.928738 systemd-modules-load[240]: Inserted module 'br_netfilter' Jul 12 00:26:05.932696 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:26:05.935626 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:26:05.936616 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:26:05.943504 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:26:05.945576 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:26:05.946641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:26:05.953350 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 00:26:05.955139 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 12 00:26:05.964320 dracut-cmdline[278]: dracut-dracut-053 Jul 12 00:26:05.966748 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:26:05.980592 systemd-resolved[279]: Positive Trust Anchors: Jul 12 00:26:05.980611 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:26:05.980643 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:26:05.985228 systemd-resolved[279]: Defaulting to hostname 'linux'. Jul 12 00:26:05.986164 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:26:05.987391 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:26:06.034268 kernel: SCSI subsystem initialized Jul 12 00:26:06.039254 kernel: Loading iSCSI transport class v2.0-870. Jul 12 00:26:06.046278 kernel: iscsi: registered transport (tcp) Jul 12 00:26:06.061265 kernel: iscsi: registered transport (qla4xxx) Jul 12 00:26:06.061287 kernel: QLogic iSCSI HBA Driver Jul 12 00:26:06.110139 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 12 00:26:06.120399 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 12 00:26:06.136617 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 12 00:26:06.137868 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:26:06.137915 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 12 00:26:06.183268 kernel: raid6: neonx8 gen() 15774 MB/s Jul 12 00:26:06.200258 kernel: raid6: neonx4 gen() 15665 MB/s Jul 12 00:26:06.217253 kernel: raid6: neonx2 gen() 13212 MB/s Jul 12 00:26:06.234259 kernel: raid6: neonx1 gen() 10457 MB/s Jul 12 00:26:06.251259 kernel: raid6: int64x8 gen() 6959 MB/s Jul 12 00:26:06.268261 kernel: raid6: int64x4 gen() 7340 MB/s Jul 12 00:26:06.285254 kernel: raid6: int64x2 gen() 6125 MB/s Jul 12 00:26:06.302262 kernel: raid6: int64x1 gen() 5049 MB/s Jul 12 00:26:06.302287 kernel: raid6: using algorithm neonx8 gen() 15774 MB/s Jul 12 00:26:06.319281 kernel: raid6: .... xor() 11932 MB/s, rmw enabled Jul 12 00:26:06.319326 kernel: raid6: using neon recovery algorithm Jul 12 00:26:06.324621 kernel: xor: measuring software checksum speed Jul 12 00:26:06.324644 kernel: 8regs : 19299 MB/sec Jul 12 00:26:06.325251 kernel: 32regs : 19679 MB/sec Jul 12 00:26:06.326250 kernel: arm64_neon : 24712 MB/sec Jul 12 00:26:06.326262 kernel: xor: using function: arm64_neon (24712 MB/sec) Jul 12 00:26:06.386262 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 00:26:06.400328 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 12 00:26:06.411437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:26:06.422671 systemd-udevd[462]: Using default interface naming scheme 'v255'. Jul 12 00:26:06.425900 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:26:06.445443 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 00:26:06.459913 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Jul 12 00:26:06.490615 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:26:06.502415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:26:06.544176 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:26:06.551395 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 00:26:06.563691 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 00:26:06.565075 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:26:06.566271 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:26:06.569762 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:26:06.579546 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 00:26:06.592576 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 12 00:26:06.604220 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 12 00:26:06.599412 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:26:06.607374 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:26:06.607401 kernel: GPT:9289727 != 19775487 Jul 12 00:26:06.607411 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:26:06.608603 kernel: GPT:9289727 != 19775487 Jul 12 00:26:06.608616 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:26:06.610257 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:26:06.611052 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:26:06.611227 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:26:06.613665 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:26:06.614544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:26:06.614680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:26:06.616408 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:26:06.623488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:26:06.634265 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (510) Jul 12 00:26:06.635259 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (514) Jul 12 00:26:06.635641 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 12 00:26:06.636865 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:26:06.645089 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jul 12 00:26:06.651774 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 12 00:26:06.654218 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 12 00:26:06.658499 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 00:26:06.668392 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 00:26:06.670112 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:26:06.674624 disk-uuid[552]: Primary Header is updated. Jul 12 00:26:06.674624 disk-uuid[552]: Secondary Entries is updated. Jul 12 00:26:06.674624 disk-uuid[552]: Secondary Header is updated. Jul 12 00:26:06.686277 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:26:06.687282 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:26:06.691259 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:26:07.695268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:26:07.695790 disk-uuid[553]: The operation has completed successfully. Jul 12 00:26:07.717971 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:26:07.718093 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 12 00:26:07.737367 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 00:26:07.740102 sh[574]: Success Jul 12 00:26:07.752342 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:26:07.782577 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 00:26:07.803592 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 12 00:26:07.805661 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 12 00:26:07.816474 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c Jul 12 00:26:07.816532 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:26:07.816543 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 12 00:26:07.818353 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 12 00:26:07.818380 kernel: BTRFS info (device dm-0): using free space tree Jul 12 00:26:07.821723 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 00:26:07.822868 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 00:26:07.834376 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 00:26:07.835741 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 12 00:26:07.842546 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:26:07.842592 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:26:07.842613 kernel: BTRFS info (device vda6): using free space tree Jul 12 00:26:07.844304 kernel: BTRFS info (device vda6): auto enabling async discard Jul 12 00:26:07.852362 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:26:07.852433 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 12 00:26:07.857889 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 00:26:07.863418 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 12 00:26:07.931609 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:26:07.940393 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:26:07.968134 systemd-networkd[766]: lo: Link UP Jul 12 00:26:07.968147 systemd-networkd[766]: lo: Gained carrier Jul 12 00:26:07.969122 systemd-networkd[766]: Enumeration completed Jul 12 00:26:07.969211 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:26:07.969709 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:26:07.969712 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:26:07.976389 ignition[662]: Ignition 2.19.0 Jul 12 00:26:07.970557 systemd[1]: Reached target network.target - Network. Jul 12 00:26:07.976396 ignition[662]: Stage: fetch-offline Jul 12 00:26:07.970574 systemd-networkd[766]: eth0: Link UP Jul 12 00:26:07.976428 ignition[662]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:26:07.970578 systemd-networkd[766]: eth0: Gained carrier Jul 12 00:26:07.976436 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:26:07.970584 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:26:07.976576 ignition[662]: parsed url from cmdline: "" Jul 12 00:26:07.976579 ignition[662]: no config URL provided Jul 12 00:26:07.976584 ignition[662]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:26:07.976590 ignition[662]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:26:07.976616 ignition[662]: op(1): [started] loading QEMU firmware config module Jul 12 00:26:07.976620 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 12 00:26:07.989036 ignition[662]: op(1): [finished] loading QEMU firmware config module Jul 12 00:26:07.989067 ignition[662]: QEMU firmware config was not found. Ignoring... Jul 12 00:26:07.990293 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:26:08.028338 ignition[662]: parsing config with SHA512: b1cb7c67f8a91934ebf860fb6dfa1657cb5f5781c0415e06adb9a119b64ff3a3364ddb4e62148d1b0c1c5bc238e039db84cedb8b4ee779f1e23450a6f2ca6afb Jul 12 00:26:08.033495 unknown[662]: fetched base config from "system" Jul 12 00:26:08.033506 unknown[662]: fetched user config from "qemu" Jul 12 00:26:08.034071 ignition[662]: fetch-offline: fetch-offline passed Jul 12 00:26:08.034416 ignition[662]: Ignition finished successfully Jul 12 00:26:08.036259 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:26:08.037562 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 12 00:26:08.044419 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 12 00:26:08.054947 ignition[774]: Ignition 2.19.0 Jul 12 00:26:08.054971 ignition[774]: Stage: kargs Jul 12 00:26:08.055149 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:26:08.055159 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:26:08.056101 ignition[774]: kargs: kargs passed Jul 12 00:26:08.056150 ignition[774]: Ignition finished successfully Jul 12 00:26:08.058701 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 12 00:26:08.071394 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 12 00:26:08.080562 ignition[783]: Ignition 2.19.0 Jul 12 00:26:08.080572 ignition[783]: Stage: disks Jul 12 00:26:08.080747 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:26:08.080756 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:26:08.083921 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 12 00:26:08.081703 ignition[783]: disks: disks passed Jul 12 00:26:08.084820 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 12 00:26:08.081748 ignition[783]: Ignition finished successfully Jul 12 00:26:08.086067 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 12 00:26:08.087347 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:26:08.088730 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:26:08.090054 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:26:08.104391 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 12 00:26:08.115152 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 12 00:26:08.118870 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 12 00:26:08.121900 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 12 00:26:08.165268 kernel: EXT4-fs (vda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none. Jul 12 00:26:08.165323 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 12 00:26:08.166447 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 12 00:26:08.178327 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 00:26:08.179903 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 12 00:26:08.180992 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 12 00:26:08.181078 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:26:08.186278 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (802) Jul 12 00:26:08.181105 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:26:08.186781 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 12 00:26:08.190452 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:26:08.190470 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:26:08.190480 kernel: BTRFS info (device vda6): using free space tree Jul 12 00:26:08.191396 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 12 00:26:08.192983 kernel: BTRFS info (device vda6): auto enabling async discard Jul 12 00:26:08.193963 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 00:26:08.231554 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:26:08.235868 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:26:08.239779 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:26:08.243014 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:26:08.314484 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 00:26:08.325361 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 12 00:26:08.326736 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 12 00:26:08.332263 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:26:08.348919 ignition[915]: INFO : Ignition 2.19.0 Jul 12 00:26:08.348919 ignition[915]: INFO : Stage: mount Jul 12 00:26:08.350335 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:26:08.350335 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:26:08.350335 ignition[915]: INFO : mount: mount passed Jul 12 00:26:08.350335 ignition[915]: INFO : Ignition finished successfully Jul 12 00:26:08.351473 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 12 00:26:08.354553 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 12 00:26:08.364360 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 12 00:26:08.815139 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 12 00:26:08.829462 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 00:26:08.834273 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (931) Jul 12 00:26:08.834325 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:26:08.835688 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:26:08.836250 kernel: BTRFS info (device vda6): using free space tree Jul 12 00:26:08.838267 kernel: BTRFS info (device vda6): auto enabling async discard Jul 12 00:26:08.839034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 12 00:26:08.854563 ignition[949]: INFO : Ignition 2.19.0 Jul 12 00:26:08.854563 ignition[949]: INFO : Stage: files Jul 12 00:26:08.855725 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:26:08.855725 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:26:08.855725 ignition[949]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:26:08.858273 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:26:08.858273 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:26:08.860165 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:26:08.860165 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:26:08.860165 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:26:08.860165 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:26:08.860165 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:26:08.860165 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:26:08.860165 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 12 00:26:08.858980 unknown[949]: wrote ssh authorized keys file for user: core Jul 12 00:26:08.900133 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 12 00:26:08.988267 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:26:08.988267 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:26:08.991362 ignition[949]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:26:08.991362 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 12 00:26:09.376776 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 12 00:26:09.948506 systemd-networkd[766]: eth0: Gained IPv6LL Jul 12 00:26:09.989803 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:26:09.989803 ignition[949]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 12 00:26:09.992531 ignition[949]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jul 12 00:26:10.012734 ignition[949]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 00:26:10.016342 ignition[949]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 00:26:10.018365 ignition[949]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jul 12 00:26:10.018365 ignition[949]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jul 12 00:26:10.018365 
ignition[949]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 00:26:10.018365 ignition[949]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:26:10.018365 ignition[949]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:26:10.018365 ignition[949]: INFO : files: files passed Jul 12 00:26:10.018365 ignition[949]: INFO : Ignition finished successfully Jul 12 00:26:10.019622 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 12 00:26:10.032471 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 12 00:26:10.035329 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 12 00:26:10.037747 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 00:26:10.038546 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 12 00:26:10.041602 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory Jul 12 00:26:10.044642 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:26:10.044642 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:26:10.048363 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:26:10.047369 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:26:10.050560 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 12 00:26:10.063405 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 12 00:26:10.080981 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:26:10.081090 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 12 00:26:10.082686 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 12 00:26:10.083996 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 12 00:26:10.085298 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 12 00:26:10.085968 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 12 00:26:10.100456 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:26:10.117376 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 12 00:26:10.124815 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:26:10.125740 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:26:10.127232 systemd[1]: Stopped target timers.target - Timer Units. Jul 12 00:26:10.128540 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:26:10.128653 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:26:10.130564 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 12 00:26:10.132066 systemd[1]: Stopped target basic.target - Basic System. Jul 12 00:26:10.133370 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jul 12 00:26:10.134766 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:26:10.136389 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 12 00:26:10.138044 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 12 00:26:10.139531 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:26:10.140942 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 12 00:26:10.142338 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 12 00:26:10.143607 systemd[1]: Stopped target swap.target - Swaps. Jul 12 00:26:10.144712 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:26:10.144820 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:26:10.146555 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:26:10.147973 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:26:10.149346 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 00:26:10.150730 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:26:10.151806 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:26:10.151912 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 00:26:10.154172 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:26:10.154294 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:26:10.155752 systemd[1]: Stopped target paths.target - Path Units. Jul 12 00:26:10.156871 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:26:10.161349 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:26:10.162297 systemd[1]: Stopped target slices.target - Slice Units. Jul 12 00:26:10.163964 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 00:26:10.165255 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:26:10.165340 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:26:10.166588 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:26:10.166667 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:26:10.167918 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:26:10.168020 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:26:10.169520 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:26:10.169619 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 00:26:10.182438 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 00:26:10.183148 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:26:10.183292 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:26:10.185649 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 12 00:26:10.186393 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:26:10.186514 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:26:10.187546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:26:10.187644 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 12 00:26:10.192989 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:26:10.193080 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 12 00:26:10.196631 ignition[1003]: INFO : Ignition 2.19.0 Jul 12 00:26:10.196631 ignition[1003]: INFO : Stage: umount Jul 12 00:26:10.196631 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:26:10.196631 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:26:10.198985 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:26:10.201398 ignition[1003]: INFO : umount: umount passed Jul 12 00:26:10.201398 ignition[1003]: INFO : Ignition finished successfully Jul 12 00:26:10.199076 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 12 00:26:10.202669 systemd[1]: Stopped target network.target - Network. Jul 12 00:26:10.203773 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:26:10.203835 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 00:26:10.205191 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:26:10.205228 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 00:26:10.206903 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:26:10.206946 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 12 00:26:10.208251 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 00:26:10.208295 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 00:26:10.210159 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 12 00:26:10.211812 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 00:26:10.214006 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:26:10.218312 systemd-networkd[766]: eth0: DHCPv6 lease lost Jul 12 00:26:10.220375 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:26:10.220481 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 00:26:10.222163 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:26:10.222191 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:26:10.231538 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 00:26:10.232286 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:26:10.232343 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:26:10.234107 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:26:10.238618 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:26:10.238714 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 00:26:10.242504 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:26:10.242601 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:26:10.244278 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:26:10.244329 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 00:26:10.245914 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 00:26:10.245957 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 12 00:26:10.248671 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:26:10.248791 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:26:10.252489 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:26:10.252627 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:26:10.254117 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:26:10.254155 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:26:10.255439 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:26:10.255467 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:26:10.257276 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:26:10.257326 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:26:10.259822 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:26:10.259866 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:26:10.262143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:26:10.262188 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:26:10.273416 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:26:10.274173 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:26:10.274226 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:26:10.275855 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:26:10.275895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:26:10.277546 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:26:10.278298 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 00:26:10.279820 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:26:10.279906 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 00:26:10.281777 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:26:10.282688 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:26:10.282747 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 00:26:10.284850 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:26:10.293514 systemd[1]: Switching root. Jul 12 00:26:10.327895 systemd-journald[239]: Journal stopped Jul 12 00:26:11.092021 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
Jul 12 00:26:11.092101 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:26:11.092120 kernel: SELinux: policy capability open_perms=1 Jul 12 00:26:11.092130 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:26:11.092147 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:26:11.092158 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:26:11.092168 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:26:11.092178 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:26:11.092190 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:26:11.092200 kernel: audit: type=1403 audit(1752279970.534:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:26:11.092211 systemd[1]: Successfully loaded SELinux policy in 33.606ms. Jul 12 00:26:11.092229 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.473ms. Jul 12 00:26:11.092277 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:26:11.092295 systemd[1]: Detected virtualization kvm. Jul 12 00:26:11.092306 systemd[1]: Detected architecture arm64. Jul 12 00:26:11.092317 systemd[1]: Detected first boot. Jul 12 00:26:11.092328 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:26:11.092339 zram_generator::config[1064]: No configuration found. Jul 12 00:26:11.092351 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:26:11.092362 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:26:11.092373 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 12 00:26:11.092387 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 00:26:11.092399 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 12 00:26:11.092410 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 00:26:11.092421 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 00:26:11.092432 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 00:26:11.092443 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 00:26:11.092454 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 00:26:11.092465 systemd[1]: Created slice user.slice - User and Session Slice. Jul 12 00:26:11.092480 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:26:11.092494 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:26:11.092505 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 00:26:11.092516 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 00:26:11.092527 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 00:26:11.092538 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
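The "zram_generator::config[1064]: No configuration found" message above means no compressed swap device is configured for this boot. For comparison, an illustrative /etc/systemd/zram-generator.conf that would have created one looks like this (values are examples, not taken from this system):

# /etc/systemd/zram-generator.conf (hypothetical; absent on this image)
[zram0]
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd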
Jul 12 00:26:11.092550 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 12 00:26:11.092561 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:26:11.092572 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:26:11.092597 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:26:11.092613 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:26:11.092625 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:26:11.092637 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:26:11.092648 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:26:11.092659 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:26:11.092670 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 00:26:11.092681 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 12 00:26:11.092692 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:26:11.092706 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:26:11.092717 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:26:11.092729 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 00:26:11.092740 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 00:26:11.092751 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:26:11.092763 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 00:26:11.092774 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:26:11.092785 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:26:11.092795 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 12 00:26:11.092808 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:26:11.092819 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:26:11.092830 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:26:11.092841 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:26:11.092853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:26:11.092864 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:26:11.092875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:26:11.092886 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:26:11.092899 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:26:11.092910 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:26:11.092921 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 12 00:26:11.092933 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 12 00:26:11.092944 systemd[1]: Starting systemd-journald.service - Journal Service... 
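The note that systemd-journald.service "configures an IP firewall, but the local system does not support BPF/cgroup firewalling" refers to the network sandboxing directives carried by the journald unit; this build reports -BPF_FRAMEWORK in its feature string, so they are skipped rather than enforced. The directive in question is of this form (excerpt written from the upstream unit's documented hardening, not copied from this image):

[Service]
IPAddressDeny=any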
Jul 12 00:26:11.092955 kernel: fuse: init (API version 7.39) Jul 12 00:26:11.092966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:26:11.092978 kernel: loop: module loaded Jul 12 00:26:11.092989 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:26:11.093002 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:26:11.093012 kernel: ACPI: bus type drm_connector registered Jul 12 00:26:11.093025 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:26:11.093037 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 00:26:11.093056 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:26:11.093069 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 00:26:11.093102 systemd-journald[1143]: Collecting audit messages is disabled. Jul 12 00:26:11.093126 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:26:11.093140 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:26:11.093152 systemd-journald[1143]: Journal started Jul 12 00:26:11.093175 systemd-journald[1143]: Runtime Journal (/run/log/journal/de1d0befe5ab492d93947b8b3e9e2a68) is 5.9M, max 47.3M, 41.4M free. Jul 12 00:26:11.095666 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:26:11.096631 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 00:26:11.097805 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:26:11.098970 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:26:11.099137 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:26:11.100477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:26:11.100622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:26:11.101678 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:26:11.101830 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:26:11.103152 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:26:11.103331 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:26:11.104410 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:26:11.104557 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 00:26:11.105748 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:26:11.105942 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:26:11.107116 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:26:11.108335 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:26:11.109571 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:26:11.115943 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:26:11.122950 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:26:11.132326 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:26:11.134331 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
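systemd-journald sized its runtime journal automatically above (5.9M used out of a 47.3M cap under /run/log/journal). If fixed limits were wanted instead, they could be pinned with a journald drop-in such as the following hypothetical example:

# /etc/systemd/journald.conf.d/10-size.conf (illustrative only)
[Journal]
RuntimeMaxUse=48M
SystemMaxUse=196M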
Jul 12 00:26:11.135171 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:26:11.138429 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 00:26:11.140523 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 12 00:26:11.141381 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:26:11.144423 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 00:26:11.145503 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:26:11.149407 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:26:11.151782 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:26:11.158874 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:26:11.160778 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 00:26:11.163618 systemd-journald[1143]: Time spent on flushing to /var/log/journal/de1d0befe5ab492d93947b8b3e9e2a68 is 15.268ms for 846 entries. Jul 12 00:26:11.163618 systemd-journald[1143]: System Journal (/var/log/journal/de1d0befe5ab492d93947b8b3e9e2a68) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:26:11.201301 systemd-journald[1143]: Received client request to flush runtime journal. Jul 12 00:26:11.166412 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 00:26:11.170630 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 00:26:11.173965 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 00:26:11.176162 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 12 00:26:11.177540 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:26:11.189686 udevadm[1209]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:26:11.198183 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Jul 12 00:26:11.198193 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Jul 12 00:26:11.202224 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:26:11.204071 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 00:26:11.209401 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 00:26:11.233506 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 12 00:26:11.247466 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:26:11.259108 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Jul 12 00:26:11.259126 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Jul 12 00:26:11.262897 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:26:11.622491 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:26:11.630457 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
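The repeated "ACLs are not supported, ignoring" messages come from systemd-tmpfiles hitting ACL entries while this systemd build is compiled without ACL support (-ACL in the feature string above), so those entries are skipped. They correspond to 'a'/'a+' lines in tmpfiles.d; an illustrative line of the kind being ignored, modeled on the stock journal rule rather than read from this image:

a+ /var/log/journal - - - - d:group:adm:r-x,group:adm:r-x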
Jul 12 00:26:11.650111 systemd-udevd[1226]: Using default interface naming scheme 'v255'. Jul 12 00:26:11.662931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:26:11.675642 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:26:11.685636 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:26:11.695267 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1231) Jul 12 00:26:11.695269 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jul 12 00:26:11.726981 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:26:11.748473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 00:26:11.797979 systemd-networkd[1234]: lo: Link UP Jul 12 00:26:11.797995 systemd-networkd[1234]: lo: Gained carrier Jul 12 00:26:11.798694 systemd-networkd[1234]: Enumeration completed Jul 12 00:26:11.799128 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:26:11.799131 systemd-networkd[1234]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:26:11.799485 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:26:11.799753 systemd-networkd[1234]: eth0: Link UP Jul 12 00:26:11.799757 systemd-networkd[1234]: eth0: Gained carrier Jul 12 00:26:11.799769 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:26:11.801408 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:26:11.804214 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:26:11.816015 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 12 00:26:11.819407 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:26:11.830329 systemd-networkd[1234]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:26:11.844192 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:26:11.861592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:26:11.871803 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 12 00:26:11.873149 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:26:11.884423 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 12 00:26:11.888042 lvm[1272]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:26:11.917896 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 12 00:26:11.919219 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 12 00:26:11.920341 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:26:11.920383 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:26:11.921251 systemd[1]: Reached target machines.target - Containers. 
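systemd-networkd matched eth0 against the catch-all /usr/lib/systemd/network/zz-default.network and obtained 10.0.0.134/16 with gateway 10.0.0.1 over DHCP. The unit's contents are not printed in the log; a minimal .network file with the same observable effect would be:

# illustrative catch-all network unit (actual zz-default.network not shown in the log)
[Match]
Name=*

[Network]
DHCP=yes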
Jul 12 00:26:11.923131 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 12 00:26:11.936403 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:26:11.938663 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:26:11.939669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:26:11.940594 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:26:11.944406 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 12 00:26:11.950390 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 12 00:26:11.952221 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:26:11.961104 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:26:11.964383 kernel: loop0: detected capacity change from 0 to 114328 Jul 12 00:26:11.971990 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:26:11.972699 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 12 00:26:11.975354 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:26:12.013282 kernel: loop1: detected capacity change from 0 to 114432 Jul 12 00:26:12.054257 kernel: loop2: detected capacity change from 0 to 203944 Jul 12 00:26:12.095355 kernel: loop3: detected capacity change from 0 to 114328 Jul 12 00:26:12.103281 kernel: loop4: detected capacity change from 0 to 114432 Jul 12 00:26:12.109268 kernel: loop5: detected capacity change from 0 to 203944 Jul 12 00:26:12.121352 (sd-merge)[1293]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 12 00:26:12.121750 (sd-merge)[1293]: Merged extensions into '/usr'. Jul 12 00:26:12.127063 systemd[1]: Reloading requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:26:12.127076 systemd[1]: Reloading... Jul 12 00:26:12.169275 zram_generator::config[1318]: No configuration found. Jul 12 00:26:12.275225 ldconfig[1276]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:26:12.279835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:26:12.328730 systemd[1]: Reloading finished in 201 ms. Jul 12 00:26:12.344325 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:26:12.347405 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:26:12.370490 systemd[1]: Starting ensure-sysext.service... Jul 12 00:26:12.372341 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:26:12.375946 systemd[1]: Reloading requested from client PID 1364 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:26:12.375962 systemd[1]: Reloading... Jul 12 00:26:12.390837 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
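The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is why a reload and an ldconfig run follow. For an image such as kubernetes-v1.31.8-arm64.raw to be merged it has to ship an extension-release file matching the host; a sketch of what that file typically contains (field values assumed, not read from the image):

# usr/lib/extension-release.d/extension-release.kubernetes (hypothetical)
ID=flatcar
SYSEXT_LEVEL=1.0
ARCHITECTURE=arm64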
Jul 12 00:26:12.391107 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:26:12.391740 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:26:12.391955 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jul 12 00:26:12.392004 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jul 12 00:26:12.395779 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:26:12.395792 systemd-tmpfiles[1365]: Skipping /boot Jul 12 00:26:12.406205 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:26:12.406219 systemd-tmpfiles[1365]: Skipping /boot Jul 12 00:26:12.421390 zram_generator::config[1396]: No configuration found. Jul 12 00:26:12.509021 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:26:12.554631 systemd[1]: Reloading finished in 178 ms. Jul 12 00:26:12.571023 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:26:12.588208 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:26:12.590457 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:26:12.592456 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:26:12.597400 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:26:12.600558 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:26:12.605145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:26:12.606358 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:26:12.611481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:26:12.613827 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:26:12.614722 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:26:12.620720 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:26:12.620958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:26:12.624044 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:26:12.626688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:26:12.626833 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:26:12.628363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:26:12.628537 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:26:12.629934 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:26:12.631460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:26:12.637413 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jul 12 00:26:12.639643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:26:12.646598 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:26:12.650599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:26:12.654574 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:26:12.658605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:26:12.659514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:26:12.663611 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:26:12.665633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:26:12.665792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:26:12.667075 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:26:12.667279 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:26:12.668661 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:26:12.668803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:26:12.670940 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:26:12.673525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:26:12.678583 systemd[1]: Finished ensure-sysext.service. Jul 12 00:26:12.681793 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:26:12.685592 augenrules[1478]: No rules Jul 12 00:26:12.687470 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:26:12.688633 systemd-resolved[1439]: Positive Trust Anchors: Jul 12 00:26:12.688646 systemd-resolved[1439]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:26:12.688678 systemd-resolved[1439]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:26:12.689614 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:26:12.691212 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:26:12.691326 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:26:12.697991 systemd-resolved[1439]: Defaulting to hostname 'linux'. Jul 12 00:26:12.699423 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 00:26:12.700196 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
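The trust anchors listed by systemd-resolved above are its built-in set: the root zone DS record as the positive anchor and the private and special-use domains as negative anchors. Additional site-local anchors could be supplied as drop-in files per dnssec-trust-anchors.d(5); a hypothetical negative-anchor file, one domain per line:

# /etc/dnssec-trust-anchors.d/site.negative (illustrative, not present here)
intranet.example
lab.example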
Jul 12 00:26:12.700354 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:26:12.701346 systemd[1]: Reached target network.target - Network. Jul 12 00:26:12.701978 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:26:12.741011 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 00:26:12.742255 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 12 00:26:12.742302 systemd-timesyncd[1497]: Initial clock synchronization to Sat 2025-07-12 00:26:12.989907 UTC. Jul 12 00:26:12.742419 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:26:12.743233 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:26:12.744132 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:26:12.745047 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:26:12.745962 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:26:12.745997 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:26:12.746656 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:26:12.747537 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:26:12.748396 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:26:12.749271 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:26:12.750365 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:26:12.752600 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:26:12.754593 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:26:12.761234 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:26:12.762157 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:26:12.762981 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:26:12.763899 systemd[1]: System is tainted: cgroupsv1 Jul 12 00:26:12.763949 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:26:12.763968 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:26:12.765082 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:26:12.767051 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:26:12.768908 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:26:12.773438 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:26:12.774331 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:26:12.775348 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:26:12.780927 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:26:12.785066 jq[1503]: false Jul 12 00:26:12.793408 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:26:12.798323 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
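systemd-timesyncd reached 10.0.0.1:123 and performed its initial synchronization, stepping the clock forward by roughly a quarter second judging by the surrounding timestamps. In this QEMU setup the server address is presumably handed out by DHCP; it could equally be pinned with a drop-in like the following hypothetical one:

# /etc/systemd/timesyncd.conf.d/10-local-ntp.conf (illustrative only)
[Time]
NTP=10.0.0.1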
Jul 12 00:26:12.801787 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:26:12.803596 dbus-daemon[1502]: [system] SELinux support is enabled Jul 12 00:26:12.807659 extend-filesystems[1504]: Found loop3 Jul 12 00:26:12.807659 extend-filesystems[1504]: Found loop4 Jul 12 00:26:12.807659 extend-filesystems[1504]: Found loop5 Jul 12 00:26:12.807659 extend-filesystems[1504]: Found vda Jul 12 00:26:12.807659 extend-filesystems[1504]: Found vda1 Jul 12 00:26:12.811411 extend-filesystems[1504]: Found vda2 Jul 12 00:26:12.811411 extend-filesystems[1504]: Found vda3 Jul 12 00:26:12.811411 extend-filesystems[1504]: Found usr Jul 12 00:26:12.811411 extend-filesystems[1504]: Found vda4 Jul 12 00:26:12.811411 extend-filesystems[1504]: Found vda6 Jul 12 00:26:12.811411 extend-filesystems[1504]: Found vda7 Jul 12 00:26:12.811411 extend-filesystems[1504]: Found vda9 Jul 12 00:26:12.811411 extend-filesystems[1504]: Checking size of /dev/vda9 Jul 12 00:26:12.808078 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:26:12.811414 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:26:12.814727 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:26:12.816386 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:26:12.820375 jq[1525]: true Jul 12 00:26:12.821626 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:26:12.821880 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:26:12.822137 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:26:12.822360 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:26:12.825705 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:26:12.825905 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:26:12.839571 extend-filesystems[1504]: Resized partition /dev/vda9 Jul 12 00:26:12.846487 (ntainerd)[1535]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:26:12.847631 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:26:12.847667 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:26:12.850280 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:26:12.850307 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 12 00:26:12.852767 jq[1533]: true Jul 12 00:26:12.858375 extend-filesystems[1541]: resize2fs 1.47.1 (20-May-2024) Jul 12 00:26:12.865549 tar[1531]: linux-arm64/helm Jul 12 00:26:12.870204 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1231) Jul 12 00:26:12.870295 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 12 00:26:12.899558 update_engine[1522]: I20250712 00:26:12.899084 1522 main.cc:92] Flatcar Update Engine starting Jul 12 00:26:12.902212 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:26:12.902437 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:26:12.902640 systemd-logind[1519]: New seat seat0. Jul 12 00:26:12.906081 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:26:12.907516 update_engine[1522]: I20250712 00:26:12.907182 1522 update_check_scheduler.cc:74] Next update check in 8m25s Jul 12 00:26:12.910170 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:26:12.915251 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 12 00:26:12.922677 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:26:12.928635 extend-filesystems[1541]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 12 00:26:12.928635 extend-filesystems[1541]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:26:12.928635 extend-filesystems[1541]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 12 00:26:12.936424 extend-filesystems[1504]: Resized filesystem in /dev/vda9 Jul 12 00:26:12.933103 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:26:12.933434 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:26:12.954772 bash[1562]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:26:12.956629 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:26:12.962303 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 12 00:26:12.977938 locksmithd[1563]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:26:13.082767 containerd[1535]: time="2025-07-12T00:26:13.082649526Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:26:13.109950 containerd[1535]: time="2025-07-12T00:26:13.109845773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:13.111411 containerd[1535]: time="2025-07-12T00:26:13.111361379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:26:13.111411 containerd[1535]: time="2025-07-12T00:26:13.111402205Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:26:13.111507 containerd[1535]: time="2025-07-12T00:26:13.111420020Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jul 12 00:26:13.111601 containerd[1535]: time="2025-07-12T00:26:13.111580522Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 12 00:26:13.111627 containerd[1535]: time="2025-07-12T00:26:13.111604646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:13.111683 containerd[1535]: time="2025-07-12T00:26:13.111667948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:26:13.111709 containerd[1535]: time="2025-07-12T00:26:13.111685021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:13.111931 containerd[1535]: time="2025-07-12T00:26:13.111898225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:26:13.111931 containerd[1535]: time="2025-07-12T00:26:13.111920123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:13.111978 containerd[1535]: time="2025-07-12T00:26:13.111936453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:26:13.111978 containerd[1535]: time="2025-07-12T00:26:13.111947093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:13.112034 containerd[1535]: time="2025-07-12T00:26:13.112019508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:13.112259 containerd[1535]: time="2025-07-12T00:26:13.112233042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:13.112417 containerd[1535]: time="2025-07-12T00:26:13.112389667Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:26:13.112441 containerd[1535]: time="2025-07-12T00:26:13.112419854Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:26:13.112516 containerd[1535]: time="2025-07-12T00:26:13.112502166Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:26:13.112566 containerd[1535]: time="2025-07-12T00:26:13.112554333Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:26:13.116264 containerd[1535]: time="2025-07-12T00:26:13.116232499Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:26:13.116318 containerd[1535]: time="2025-07-12T00:26:13.116306440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:26:13.116339 containerd[1535]: time="2025-07-12T00:26:13.116328544Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jul 12 00:26:13.116403 containerd[1535]: time="2025-07-12T00:26:13.116348091Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 12 00:26:13.116491 containerd[1535]: time="2025-07-12T00:26:13.116473580Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:26:13.116656 containerd[1535]: time="2025-07-12T00:26:13.116638577Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:26:13.117317 containerd[1535]: time="2025-07-12T00:26:13.117226558Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:26:13.117392 containerd[1535]: time="2025-07-12T00:26:13.117366976Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:26:13.117417 containerd[1535]: time="2025-07-12T00:26:13.117390276Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 12 00:26:13.117417 containerd[1535]: time="2025-07-12T00:26:13.117404339Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 12 00:26:13.117454 containerd[1535]: time="2025-07-12T00:26:13.117418195Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:26:13.117454 containerd[1535]: time="2025-07-12T00:26:13.117436010Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:26:13.117501 containerd[1535]: time="2025-07-12T00:26:13.117456464Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:26:13.117501 containerd[1535]: time="2025-07-12T00:26:13.117472259Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:26:13.117501 containerd[1535]: time="2025-07-12T00:26:13.117487393Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:26:13.117554 containerd[1535]: time="2025-07-12T00:26:13.117500177Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:26:13.117554 containerd[1535]: time="2025-07-12T00:26:13.117516426Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:26:13.117554 containerd[1535]: time="2025-07-12T00:26:13.117529003Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:26:13.117627 containerd[1535]: time="2025-07-12T00:26:13.117549293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117627 containerd[1535]: time="2025-07-12T00:26:13.117588593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117627 containerd[1535]: time="2025-07-12T00:26:13.117602779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117627 containerd[1535]: time="2025-07-12T00:26:13.117621708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 12 00:26:13.117709 containerd[1535]: time="2025-07-12T00:26:13.117635606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117709 containerd[1535]: time="2025-07-12T00:26:13.117649544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117709 containerd[1535]: time="2025-07-12T00:26:13.117661627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117709 containerd[1535]: time="2025-07-12T00:26:13.117674040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117709 containerd[1535]: time="2025-07-12T00:26:13.117686535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117709 containerd[1535]: time="2025-07-12T00:26:13.117700144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117819 containerd[1535]: time="2025-07-12T00:26:13.117713134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117819 containerd[1535]: time="2025-07-12T00:26:13.117730248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117819 containerd[1535]: time="2025-07-12T00:26:13.117743527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117819 containerd[1535]: time="2025-07-12T00:26:13.117759404Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:26:13.117819 containerd[1535]: time="2025-07-12T00:26:13.117785013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117819 containerd[1535]: time="2025-07-12T00:26:13.117797509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.117819 containerd[1535]: time="2025-07-12T00:26:13.117809633Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:26:13.117944 containerd[1535]: time="2025-07-12T00:26:13.117923081Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:26:13.117964 containerd[1535]: time="2025-07-12T00:26:13.117940855Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:26:13.117964 containerd[1535]: time="2025-07-12T00:26:13.117952401Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:26:13.118003 containerd[1535]: time="2025-07-12T00:26:13.117964897Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:26:13.118003 containerd[1535]: time="2025-07-12T00:26:13.117974299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.118003 containerd[1535]: time="2025-07-12T00:26:13.117989351Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jul 12 00:26:13.118003 containerd[1535]: time="2025-07-12T00:26:13.117999950Z" level=info msg="NRI interface is disabled by configuration." Jul 12 00:26:13.118142 containerd[1535]: time="2025-07-12T00:26:13.118124986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:26:13.118684 containerd[1535]: time="2025-07-12T00:26:13.118545703Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:26:13.118801 containerd[1535]: time="2025-07-12T00:26:13.118685832Z" level=info msg="Connect containerd service" Jul 12 00:26:13.118801 containerd[1535]: time="2025-07-12T00:26:13.118728102Z" level=info msg="using legacy CRI server" Jul 12 00:26:13.118801 containerd[1535]: time="2025-07-12T00:26:13.118736639Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:26:13.118908 containerd[1535]: time="2025-07-12T00:26:13.118887737Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:26:13.119931 
containerd[1535]: time="2025-07-12T00:26:13.119892767Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:26:13.120248 containerd[1535]: time="2025-07-12T00:26:13.120158921Z" level=info msg="Start subscribing containerd event" Jul 12 00:26:13.120384 containerd[1535]: time="2025-07-12T00:26:13.120281689Z" level=info msg="Start recovering state" Jul 12 00:26:13.120421 containerd[1535]: time="2025-07-12T00:26:13.120406807Z" level=info msg="Start event monitor" Jul 12 00:26:13.120458 containerd[1535]: time="2025-07-12T00:26:13.120432870Z" level=info msg="Start snapshots syncer" Jul 12 00:26:13.120458 containerd[1535]: time="2025-07-12T00:26:13.120450686Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:26:13.120501 containerd[1535]: time="2025-07-12T00:26:13.120461078Z" level=info msg="Start streaming server" Jul 12 00:26:13.121349 containerd[1535]: time="2025-07-12T00:26:13.121280450Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:26:13.121472 containerd[1535]: time="2025-07-12T00:26:13.121454601Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:26:13.121633 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:26:13.127332 containerd[1535]: time="2025-07-12T00:26:13.126502841Z" level=info msg="containerd successfully booted in 0.046994s" Jul 12 00:26:13.244821 tar[1531]: linux-arm64/LICENSE Jul 12 00:26:13.245010 tar[1531]: linux-arm64/README.md Jul 12 00:26:13.261949 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:26:13.475756 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:26:13.494804 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:26:13.505510 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:26:13.510936 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:26:13.511169 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:26:13.513504 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:26:13.526347 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:26:13.537551 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:26:13.539429 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 12 00:26:13.540402 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:26:13.853735 systemd-networkd[1234]: eth0: Gained IPv6LL Jul 12 00:26:13.856250 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:26:13.857960 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:26:13.869584 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 12 00:26:13.872391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:26:13.874494 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:26:13.891513 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 00:26:13.891998 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 12 00:26:13.893934 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
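The "failed to load cni during init" error above means containerd's CRI plugin found nothing under /etc/cni/net.d (the NetworkPluginConfDir from the config dump); pod sandboxes can still be created, but pod networking stays uninitialized until some CNI configuration appears. In the kubeadm-style bootstrap that follows, a network add-on normally installs its own config later, so the error is expected here. For illustration only, a minimal bridge conflist, assuming the reference CNI plugins exist under /opt/cni/bin (the NetworkPluginBinDir above) and that 10.88.0.0/16 is unused, might look like:

    # hypothetical sketch: give the CRI plugin one network config to load
    cat <<'EOF' >/etc/cni/net.d/10-containerd-net.conflist
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
    # the "cni network conf syncer" task started above watches this directory, so no containerd restart is needed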
Jul 12 00:26:13.894498 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:26:14.448347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:26:14.449583 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:26:14.450901 systemd[1]: Startup finished in 5.384s (kernel) + 3.948s (userspace) = 9.333s. Jul 12 00:26:14.452522 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:26:14.947854 kubelet[1639]: E0712 00:26:14.947797 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:26:14.950480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:26:14.950673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:26:18.912586 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:26:18.925508 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:49702.service - OpenSSH per-connection server daemon (10.0.0.1:49702). Jul 12 00:26:18.976222 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 49702 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:26:18.978049 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:26:19.002364 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:26:19.011470 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:26:19.018014 systemd-logind[1519]: New session 1 of user core. Jul 12 00:26:19.024736 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:26:19.027353 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:26:19.033203 (systemd)[1658]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:19.106167 systemd[1658]: Queued start job for default target default.target. Jul 12 00:26:19.106553 systemd[1658]: Created slice app.slice - User Application Slice. Jul 12 00:26:19.106577 systemd[1658]: Reached target paths.target - Paths. Jul 12 00:26:19.106588 systemd[1658]: Reached target timers.target - Timers. Jul 12 00:26:19.118339 systemd[1658]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:26:19.126312 systemd[1658]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:26:19.126875 systemd[1658]: Reached target sockets.target - Sockets. Jul 12 00:26:19.126890 systemd[1658]: Reached target basic.target - Basic System. Jul 12 00:26:19.126929 systemd[1658]: Reached target default.target - Main User Target. Jul 12 00:26:19.126954 systemd[1658]: Startup finished in 88ms. Jul 12 00:26:19.127360 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:26:19.128803 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:26:19.193491 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:49716.service - OpenSSH per-connection server daemon (10.0.0.1:49716). 
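The kubelet's first start fails because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is only written during kubeadm init/join, so this early exit is expected. For reference, the missing file is a KubeletConfiguration; a minimal hand-written sketch (field names from kubelet.config.k8s.io/v1beta1, the values here are assumptions) would be:

    # hypothetical sketch of the file whose absence causes the run.go:72 error above
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs              # must match the container runtime's cgroup driver
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10
    EOF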
Jul 12 00:26:19.229818 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 49716 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:26:19.231170 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:26:19.236199 systemd-logind[1519]: New session 2 of user core. Jul 12 00:26:19.246639 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 00:26:19.302360 sshd[1670]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:19.311534 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:49720.service - OpenSSH per-connection server daemon (10.0.0.1:49720). Jul 12 00:26:19.311914 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:49716.service: Deactivated successfully. Jul 12 00:26:19.313790 systemd-logind[1519]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:26:19.314405 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:26:19.316569 systemd-logind[1519]: Removed session 2. Jul 12 00:26:19.344011 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 49720 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:26:19.345434 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:26:19.353293 systemd-logind[1519]: New session 3 of user core. Jul 12 00:26:19.360538 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:26:19.411466 sshd[1675]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:19.422562 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:49722.service - OpenSSH per-connection server daemon (10.0.0.1:49722). Jul 12 00:26:19.422965 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:49720.service: Deactivated successfully. Jul 12 00:26:19.425044 systemd-logind[1519]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:26:19.425592 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:26:19.427644 systemd-logind[1519]: Removed session 3. Jul 12 00:26:19.456014 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 49722 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:26:19.457484 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:26:19.462044 systemd-logind[1519]: New session 4 of user core. Jul 12 00:26:19.477577 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:26:19.532658 sshd[1683]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:19.543511 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:49732.service - OpenSSH per-connection server daemon (10.0.0.1:49732). Jul 12 00:26:19.543925 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:49722.service: Deactivated successfully. Jul 12 00:26:19.545426 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:26:19.546085 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:26:19.547882 systemd-logind[1519]: Removed session 4. Jul 12 00:26:19.575730 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 49732 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:26:19.577181 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:26:19.582931 systemd-logind[1519]: New session 5 of user core. Jul 12 00:26:19.591579 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 12 00:26:19.654973 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:26:19.655283 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:26:19.670490 sudo[1698]: pam_unix(sudo:session): session closed for user root Jul 12 00:26:19.674331 sshd[1692]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:19.683549 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:49746.service - OpenSSH per-connection server daemon (10.0.0.1:49746). Jul 12 00:26:19.684298 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:49732.service: Deactivated successfully. Jul 12 00:26:19.685932 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:26:19.686579 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:26:19.688029 systemd-logind[1519]: Removed session 5. Jul 12 00:26:19.715954 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 49746 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:26:19.717530 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:26:19.721611 systemd-logind[1519]: New session 6 of user core. Jul 12 00:26:19.733535 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 00:26:19.786543 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:26:19.786883 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:26:19.790839 sudo[1708]: pam_unix(sudo:session): session closed for user root Jul 12 00:26:19.795469 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:26:19.795748 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:26:19.813511 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:26:19.815372 auditctl[1711]: No rules Jul 12 00:26:19.815850 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:26:19.816069 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:26:19.818377 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:26:19.846082 augenrules[1730]: No rules Jul 12 00:26:19.847740 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:26:19.850411 sudo[1707]: pam_unix(sudo:session): session closed for user root Jul 12 00:26:19.852466 sshd[1700]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:19.871579 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:49760.service - OpenSSH per-connection server daemon (10.0.0.1:49760). Jul 12 00:26:19.872495 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:49746.service: Deactivated successfully. Jul 12 00:26:19.874647 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:26:19.874656 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:26:19.876877 systemd-logind[1519]: Removed session 6. Jul 12 00:26:19.903133 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 49760 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:26:19.904438 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:26:19.909293 systemd-logind[1519]: New session 7 of user core. Jul 12 00:26:19.920584 systemd[1]: Started session-7.scope - Session 7 of User core. 
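The sudo and audit entries above are the image's SELinux/audit cleanup: the shipped rule files under /etc/audit/rules.d are deleted, audit-rules.service is restarted, and both auditctl and augenrules then report "No rules". The restart is roughly equivalent to the usual augenrules sequence:

    # rough equivalent of the audit-rules.service restart seen above (sketch)
    auditctl -D            # flush rules currently loaded in the kernel
    augenrules --load      # concatenate /etc/audit/rules.d/*.rules and load the result
    auditctl -l            # with the rules.d files removed this prints "No rules", as logged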
Jul 12 00:26:19.973741 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:26:19.974021 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:26:20.294493 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:26:20.294723 (dockerd)[1762]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:26:20.603262 dockerd[1762]: time="2025-07-12T00:26:20.601850758Z" level=info msg="Starting up" Jul 12 00:26:20.874744 dockerd[1762]: time="2025-07-12T00:26:20.874644594Z" level=info msg="Loading containers: start." Jul 12 00:26:20.974292 kernel: Initializing XFRM netlink socket Jul 12 00:26:21.040207 systemd-networkd[1234]: docker0: Link UP Jul 12 00:26:21.099434 dockerd[1762]: time="2025-07-12T00:26:21.099385885Z" level=info msg="Loading containers: done." Jul 12 00:26:21.115091 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1129379823-merged.mount: Deactivated successfully. Jul 12 00:26:21.116420 dockerd[1762]: time="2025-07-12T00:26:21.116386848Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:26:21.116518 dockerd[1762]: time="2025-07-12T00:26:21.116474450Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:26:21.116577 dockerd[1762]: time="2025-07-12T00:26:21.116567834Z" level=info msg="Daemon has completed initialization" Jul 12 00:26:21.144858 dockerd[1762]: time="2025-07-12T00:26:21.144644210Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:26:21.144796 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:26:21.753872 containerd[1535]: time="2025-07-12T00:26:21.753830976Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 12 00:26:22.429869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811703887.mount: Deactivated successfully. 
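The dockerd startup above is incidental; the control-plane images that follow are pulled by containerd's CRI plugin (the containerd[1535] "PullImage" lines), not by Docker. As a hedged way to reproduce or inspect such a pull by hand, crictl can be pointed at the socket containerd advertised earlier:

    # assumes crictl is installed; the endpoint matches containerd.sock from the log above
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.31.10
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images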
Jul 12 00:26:23.241995 containerd[1535]: time="2025-07-12T00:26:23.241937276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:23.244947 containerd[1535]: time="2025-07-12T00:26:23.244905422Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 12 00:26:23.246276 containerd[1535]: time="2025-07-12T00:26:23.245989623Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:23.250118 containerd[1535]: time="2025-07-12T00:26:23.248998741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:23.250185 containerd[1535]: time="2025-07-12T00:26:23.250136253Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.496260023s" Jul 12 00:26:23.250185 containerd[1535]: time="2025-07-12T00:26:23.250165772Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 12 00:26:23.253824 containerd[1535]: time="2025-07-12T00:26:23.253783407Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 00:26:24.330573 containerd[1535]: time="2025-07-12T00:26:24.330522168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:24.331681 containerd[1535]: time="2025-07-12T00:26:24.331414363Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 12 00:26:24.332654 containerd[1535]: time="2025-07-12T00:26:24.332085312Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:24.335775 containerd[1535]: time="2025-07-12T00:26:24.335732044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:24.336530 containerd[1535]: time="2025-07-12T00:26:24.336499154Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.082670996s" Jul 12 00:26:24.336592 containerd[1535]: time="2025-07-12T00:26:24.336529489Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 12 
00:26:24.338181 containerd[1535]: time="2025-07-12T00:26:24.338054725Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 00:26:25.059450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:26:25.068463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:26:25.185313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:26:25.202642 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:26:25.244180 kubelet[1984]: E0712 00:26:25.243942 1984 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:26:25.252822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:26:25.252980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:26:25.490725 containerd[1535]: time="2025-07-12T00:26:25.490293834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:25.491488 containerd[1535]: time="2025-07-12T00:26:25.490858132Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 12 00:26:25.491949 containerd[1535]: time="2025-07-12T00:26:25.491919149Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:25.494763 containerd[1535]: time="2025-07-12T00:26:25.494715806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:25.496921 containerd[1535]: time="2025-07-12T00:26:25.496885053Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.158797303s" Jul 12 00:26:25.496989 containerd[1535]: time="2025-07-12T00:26:25.496919506Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 12 00:26:25.497606 containerd[1535]: time="2025-07-12T00:26:25.497428381Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:26:26.502994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908102909.mount: Deactivated successfully. 
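The second kubelet failure is the same missing /var/lib/kubelet/config.yaml as before; "Scheduled restart job, restart counter is at 1" simply means systemd is respawning the unit (here roughly ten seconds after the previous exit) and will keep doing so until kubeadm writes the config. The restart behaviour comes from the unit's [Service] settings, presumably something like:

    # sketch of the relevant [Service] options; the exact unit shipped on this image may differ
    [Service]
    Restart=always
    RestartSec=10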
Jul 12 00:26:26.872302 containerd[1535]: time="2025-07-12T00:26:26.871979662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:26.872721 containerd[1535]: time="2025-07-12T00:26:26.872566488Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 12 00:26:26.873489 containerd[1535]: time="2025-07-12T00:26:26.873444454Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:26.875401 containerd[1535]: time="2025-07-12T00:26:26.875359167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:26.876065 containerd[1535]: time="2025-07-12T00:26:26.876022568Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.378561633s" Jul 12 00:26:26.876098 containerd[1535]: time="2025-07-12T00:26:26.876062746Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 12 00:26:26.876500 containerd[1535]: time="2025-07-12T00:26:26.876474823Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:26:27.470296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928343046.mount: Deactivated successfully. 
Jul 12 00:26:28.125351 containerd[1535]: time="2025-07-12T00:26:28.125288430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:28.126615 containerd[1535]: time="2025-07-12T00:26:28.126568235Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 12 00:26:28.127515 containerd[1535]: time="2025-07-12T00:26:28.127479829Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:28.131518 containerd[1535]: time="2025-07-12T00:26:28.131479794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:28.132784 containerd[1535]: time="2025-07-12T00:26:28.132746746Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.256240885s" Jul 12 00:26:28.132819 containerd[1535]: time="2025-07-12T00:26:28.132786150Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:26:28.133254 containerd[1535]: time="2025-07-12T00:26:28.133200916Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:26:28.557155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107197367.mount: Deactivated successfully. 
Jul 12 00:26:28.561684 containerd[1535]: time="2025-07-12T00:26:28.561637290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:28.562628 containerd[1535]: time="2025-07-12T00:26:28.562597606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 12 00:26:28.563696 containerd[1535]: time="2025-07-12T00:26:28.563657456Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:28.566121 containerd[1535]: time="2025-07-12T00:26:28.566087729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:28.567245 containerd[1535]: time="2025-07-12T00:26:28.566942526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 433.69582ms" Jul 12 00:26:28.567245 containerd[1535]: time="2025-07-12T00:26:28.566975022Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:26:28.567441 containerd[1535]: time="2025-07-12T00:26:28.567394327Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 00:26:29.065308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount391803875.mount: Deactivated successfully. Jul 12 00:26:30.593329 containerd[1535]: time="2025-07-12T00:26:30.592475283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:30.593329 containerd[1535]: time="2025-07-12T00:26:30.593091568Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 12 00:26:30.594057 containerd[1535]: time="2025-07-12T00:26:30.594015194Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:30.597566 containerd[1535]: time="2025-07-12T00:26:30.597528759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:30.599154 containerd[1535]: time="2025-07-12T00:26:30.598929707Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.031500844s" Jul 12 00:26:30.599154 containerd[1535]: time="2025-07-12T00:26:30.598965862Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 12 00:26:35.125430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
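By this point every image a v1.31 control plane needs is cached locally: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.31.10, coredns v1.11.3, pause 3.10 and etcd 3.5.15-0. That set should match what kubeadm itself would prefetch, which can be cross-checked on the node with:

    # assumes kubeadm is installed; lists the images kubeadm expects for this release
    kubeadm config images list --kubernetes-version v1.31.10
    # what containerd's CRI (k8s.io) namespace actually holds
    ctr -n k8s.io images ls -q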
Jul 12 00:26:35.142507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:26:35.172533 systemd[1]: Reloading requested from client PID 2142 ('systemctl') (unit session-7.scope)... Jul 12 00:26:35.172557 systemd[1]: Reloading... Jul 12 00:26:35.236279 zram_generator::config[2184]: No configuration found. Jul 12 00:26:35.336441 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:26:35.390147 systemd[1]: Reloading finished in 217 ms. Jul 12 00:26:35.425623 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 00:26:35.425695 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 00:26:35.426000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:26:35.428752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:26:35.532955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:26:35.537293 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:26:35.593260 kubelet[2239]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:26:35.593260 kubelet[2239]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:26:35.593260 kubelet[2239]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
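The three deprecation warnings concern flags that are still being passed on the kubelet command line (via the kubeadm-generated environment file) but that upstream wants in the KubeletConfiguration instead. Two of them have direct config-file equivalents; the pod-infra (sandbox) image is meant to be configured on the CRI side rather than in the kubelet at all. A sketch of the migrated fields, using the same values that appear elsewhere in this log:

    # KubeletConfiguration fragment replacing the deprecated flags (kubelet.config.k8s.io/v1beta1 field names)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock        # replaces --container-runtime-endpoint
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # replaces --volume-plugin-dir
    # --pod-infra-container-image has no config-file field; set sandbox_image in containerd's config.toml instead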
Jul 12 00:26:35.593675 kubelet[2239]: I0712 00:26:35.593324 2239 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:26:36.442973 kubelet[2239]: I0712 00:26:36.442904 2239 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:26:36.442973 kubelet[2239]: I0712 00:26:36.442941 2239 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:26:36.443262 kubelet[2239]: I0712 00:26:36.443214 2239 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:26:36.478313 kubelet[2239]: E0712 00:26:36.478263 2239 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:36.481173 kubelet[2239]: I0712 00:26:36.481140 2239 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:26:36.493764 kubelet[2239]: E0712 00:26:36.493647 2239 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:26:36.493764 kubelet[2239]: I0712 00:26:36.493685 2239 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:26:36.501442 kubelet[2239]: I0712 00:26:36.501390 2239 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:26:36.502404 kubelet[2239]: I0712 00:26:36.502385 2239 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:26:36.502660 kubelet[2239]: I0712 00:26:36.502627 2239 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:26:36.502903 kubelet[2239]: I0712 00:26:36.502719 2239 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 12 00:26:36.503157 kubelet[2239]: I0712 00:26:36.503145 2239 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:26:36.503208 kubelet[2239]: I0712 00:26:36.503201 2239 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:26:36.503516 kubelet[2239]: I0712 00:26:36.503504 2239 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:26:36.510867 kubelet[2239]: I0712 00:26:36.510839 2239 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:26:36.510985 kubelet[2239]: I0712 00:26:36.510976 2239 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:26:36.511069 kubelet[2239]: I0712 00:26:36.511061 2239 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:26:36.511197 kubelet[2239]: I0712 00:26:36.511187 2239 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:26:36.514599 kubelet[2239]: W0712 00:26:36.512573 2239 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jul 12 00:26:36.514599 kubelet[2239]: E0712 00:26:36.512649 2239 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:36.514599 kubelet[2239]: W0712 00:26:36.512730 2239 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jul 12 00:26:36.514599 kubelet[2239]: E0712 00:26:36.512781 2239 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:36.524601 kubelet[2239]: I0712 00:26:36.524579 2239 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:26:36.525443 kubelet[2239]: I0712 00:26:36.525429 2239 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:26:36.525689 kubelet[2239]: W0712 00:26:36.525678 2239 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:26:36.526701 kubelet[2239]: I0712 00:26:36.526684 2239 server.go:1274] "Started kubelet" Jul 12 00:26:36.527922 kubelet[2239]: I0712 00:26:36.527856 2239 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:26:36.528149 kubelet[2239]: I0712 00:26:36.528136 2239 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:26:36.528562 kubelet[2239]: I0712 00:26:36.528139 2239 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:26:36.528626 kubelet[2239]: I0712 00:26:36.528159 2239 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:26:36.528895 kubelet[2239]: I0712 00:26:36.528873 2239 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:26:36.528992 kubelet[2239]: I0712 00:26:36.528979 2239 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:26:36.529055 kubelet[2239]: I0712 00:26:36.529044 2239 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:26:36.529488 kubelet[2239]: W0712 00:26:36.529449 2239 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jul 12 00:26:36.529549 kubelet[2239]: E0712 00:26:36.529501 2239 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:36.529849 kubelet[2239]: I0712 00:26:36.529832 2239 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:26:36.530080 kubelet[2239]: E0712 00:26:36.529922 2239 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:26:36.530080 kubelet[2239]: E0712 
00:26:36.530005 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Jul 12 00:26:36.531536 kubelet[2239]: I0712 00:26:36.531508 2239 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:26:36.531674 kubelet[2239]: I0712 00:26:36.531646 2239 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:26:36.531750 kubelet[2239]: I0712 00:26:36.531735 2239 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:26:36.533765 kubelet[2239]: I0712 00:26:36.533735 2239 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:26:36.534653 kubelet[2239]: E0712 00:26:36.532329 2239 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1851595f73fe5f1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:26:36.526649117 +0000 UTC m=+0.986082666,LastTimestamp:2025-07-12 00:26:36.526649117 +0000 UTC m=+0.986082666,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:26:36.534653 kubelet[2239]: E0712 00:26:36.534622 2239 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:26:36.545643 kubelet[2239]: I0712 00:26:36.545596 2239 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:26:36.546715 kubelet[2239]: I0712 00:26:36.546695 2239 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:26:36.546768 kubelet[2239]: I0712 00:26:36.546726 2239 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:26:36.546768 kubelet[2239]: I0712 00:26:36.546743 2239 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:26:36.546819 kubelet[2239]: E0712 00:26:36.546786 2239 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:26:36.547928 kubelet[2239]: W0712 00:26:36.547897 2239 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jul 12 00:26:36.548047 kubelet[2239]: E0712 00:26:36.547939 2239 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:36.554948 kubelet[2239]: I0712 00:26:36.554912 2239 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:26:36.554948 kubelet[2239]: I0712 00:26:36.554930 2239 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:26:36.554948 kubelet[2239]: I0712 00:26:36.554949 2239 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:26:36.558246 kubelet[2239]: I0712 00:26:36.558213 2239 policy_none.go:49] "None policy: Start" Jul 12 00:26:36.560458 kubelet[2239]: I0712 00:26:36.560435 2239 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:26:36.560458 kubelet[2239]: I0712 00:26:36.560461 2239 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:26:36.564544 kubelet[2239]: I0712 00:26:36.564513 2239 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:26:36.564737 kubelet[2239]: I0712 00:26:36.564699 2239 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:26:36.564773 kubelet[2239]: I0712 00:26:36.564726 2239 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:26:36.565464 kubelet[2239]: I0712 00:26:36.565447 2239 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:26:36.566150 kubelet[2239]: E0712 00:26:36.566098 2239 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:26:36.666169 kubelet[2239]: I0712 00:26:36.666126 2239 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:26:36.666674 kubelet[2239]: E0712 00:26:36.666614 2239 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jul 12 00:26:36.731199 kubelet[2239]: I0712 00:26:36.730903 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:26:36.731199 kubelet[2239]: I0712 
00:26:36.730953 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5266373b8fcc6bb8405c7389c57c042-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d5266373b8fcc6bb8405c7389c57c042\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:26:36.731199 kubelet[2239]: I0712 00:26:36.730997 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:26:36.731199 kubelet[2239]: I0712 00:26:36.731023 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:26:36.731199 kubelet[2239]: I0712 00:26:36.731053 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:26:36.731440 kubelet[2239]: I0712 00:26:36.731069 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:26:36.731440 kubelet[2239]: I0712 00:26:36.731091 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5266373b8fcc6bb8405c7389c57c042-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5266373b8fcc6bb8405c7389c57c042\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:26:36.731440 kubelet[2239]: I0712 00:26:36.731106 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5266373b8fcc6bb8405c7389c57c042-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5266373b8fcc6bb8405c7389c57c042\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:26:36.731440 kubelet[2239]: I0712 00:26:36.731136 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:26:36.731440 kubelet[2239]: E0712 00:26:36.731297 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Jul 12 00:26:36.868455 kubelet[2239]: I0712 00:26:36.868413 2239 
kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:26:36.868846 kubelet[2239]: E0712 00:26:36.868810 2239 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jul 12 00:26:36.951582 kubelet[2239]: E0712 00:26:36.951544 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:36.952297 containerd[1535]: time="2025-07-12T00:26:36.952184399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d5266373b8fcc6bb8405c7389c57c042,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:36.953404 kubelet[2239]: E0712 00:26:36.953381 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:36.953815 containerd[1535]: time="2025-07-12T00:26:36.953776242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:36.955521 kubelet[2239]: E0712 00:26:36.955471 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:36.955899 containerd[1535]: time="2025-07-12T00:26:36.955865718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:37.132517 kubelet[2239]: E0712 00:26:37.132462 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Jul 12 00:26:37.270279 kubelet[2239]: I0712 00:26:37.270229 2239 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:26:37.270613 kubelet[2239]: E0712 00:26:37.270586 2239 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jul 12 00:26:37.460046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248028099.mount: Deactivated successfully. 
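Everything from the "connection refused" reflector errors down to the failed node registration is the normal kubeadm bootstrap window: the kubelet is up, has fallen back to the cgroupfs cgroup driver (consistent with SystemdCgroup:false in the containerd config dumped earlier), has found the static pod manifests under /etc/kubernetes/manifests, and is creating sandboxes for kube-apiserver, kube-controller-manager and kube-scheduler, but nothing answers on 10.0.0.134:6443 until the apiserver it is about to start becomes ready. The "Nameserver limits exceeded" warnings are unrelated: only the first three nameservers from the host's resolv.conf are applied, so any extra entries are dropped. Progress during this window can be watched from the node, for example:

    # hedged bootstrap checks; paths and endpoints match the ones in the log
    ls /etc/kubernetes/manifests/                                            # static pod manifests the kubelet is acting on
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a   # control-plane containers as they appear
    curl -k https://10.0.0.134:6443/healthz                                  # stops failing with "connection refused" once the apiserver is up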
Jul 12 00:26:37.466526 containerd[1535]: time="2025-07-12T00:26:37.465204570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:26:37.466526 containerd[1535]: time="2025-07-12T00:26:37.466426864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:26:37.467407 containerd[1535]: time="2025-07-12T00:26:37.467373812Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:26:37.468406 containerd[1535]: time="2025-07-12T00:26:37.468379234Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:26:37.469657 containerd[1535]: time="2025-07-12T00:26:37.469627400Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:26:37.470271 containerd[1535]: time="2025-07-12T00:26:37.470248379Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:26:37.472605 containerd[1535]: time="2025-07-12T00:26:37.472565567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 12 00:26:37.474405 containerd[1535]: time="2025-07-12T00:26:37.474373956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:26:37.475698 containerd[1535]: time="2025-07-12T00:26:37.475453551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 521.603685ms" Jul 12 00:26:37.476857 containerd[1535]: time="2025-07-12T00:26:37.476825552Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 524.538006ms" Jul 12 00:26:37.478219 containerd[1535]: time="2025-07-12T00:26:37.478184858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 522.248801ms" Jul 12 00:26:37.595807 kubelet[2239]: W0712 00:26:37.595749 2239 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jul 12 00:26:37.595962 
kubelet[2239]: E0712 00:26:37.595817 2239 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:37.615998 containerd[1535]: time="2025-07-12T00:26:37.615926974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:37.616173 containerd[1535]: time="2025-07-12T00:26:37.616008837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:37.616173 containerd[1535]: time="2025-07-12T00:26:37.616035871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:37.617128 containerd[1535]: time="2025-07-12T00:26:37.616998719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:37.617128 containerd[1535]: time="2025-07-12T00:26:37.617053788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:37.617128 containerd[1535]: time="2025-07-12T00:26:37.617069608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:37.617473 containerd[1535]: time="2025-07-12T00:26:37.617201213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:37.617675 containerd[1535]: time="2025-07-12T00:26:37.617631272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:37.620227 containerd[1535]: time="2025-07-12T00:26:37.617072972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:37.620227 containerd[1535]: time="2025-07-12T00:26:37.618158854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:37.620227 containerd[1535]: time="2025-07-12T00:26:37.618173673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:37.620227 containerd[1535]: time="2025-07-12T00:26:37.618276442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:37.672547 containerd[1535]: time="2025-07-12T00:26:37.672302633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d5266373b8fcc6bb8405c7389c57c042,Namespace:kube-system,Attempt:0,} returns sandbox id \"991689271f2c6e7a9cf0043a69e59b46f3c300be70aeb519c699458bc7ffe28c\"" Jul 12 00:26:37.672703 containerd[1535]: time="2025-07-12T00:26:37.672654955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"b96d51af0ae49a66f6c123ee4de32fc4a4eb672dc72574c3b583c3c84c70973d\"" Jul 12 00:26:37.672842 containerd[1535]: time="2025-07-12T00:26:37.672803341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb2372f53cf60ac5904477df27fff7a7ce3d906647b6825cbb936d49078b40be\"" Jul 12 00:26:37.674200 kubelet[2239]: E0712 00:26:37.674158 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:37.674483 kubelet[2239]: E0712 00:26:37.674348 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:37.674483 kubelet[2239]: E0712 00:26:37.674423 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:37.676705 containerd[1535]: time="2025-07-12T00:26:37.676594699Z" level=info msg="CreateContainer within sandbox \"b96d51af0ae49a66f6c123ee4de32fc4a4eb672dc72574c3b583c3c84c70973d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:26:37.677287 containerd[1535]: time="2025-07-12T00:26:37.677214436Z" level=info msg="CreateContainer within sandbox \"cb2372f53cf60ac5904477df27fff7a7ce3d906647b6825cbb936d49078b40be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:26:37.677705 containerd[1535]: time="2025-07-12T00:26:37.677584420Z" level=info msg="CreateContainer within sandbox \"991689271f2c6e7a9cf0043a69e59b46f3c300be70aeb519c699458bc7ffe28c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:26:37.691735 containerd[1535]: time="2025-07-12T00:26:37.691689599Z" level=info msg="CreateContainer within sandbox \"991689271f2c6e7a9cf0043a69e59b46f3c300be70aeb519c699458bc7ffe28c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e189095f6d95d91aaf0131fac24ce389286aeac347c2cc2900fec7d0a7bdfc0e\"" Jul 12 00:26:37.692686 containerd[1535]: time="2025-07-12T00:26:37.692658615Z" level=info msg="StartContainer for \"e189095f6d95d91aaf0131fac24ce389286aeac347c2cc2900fec7d0a7bdfc0e\"" Jul 12 00:26:37.694996 containerd[1535]: time="2025-07-12T00:26:37.694949330Z" level=info msg="CreateContainer within sandbox \"cb2372f53cf60ac5904477df27fff7a7ce3d906647b6825cbb936d49078b40be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"739c66a1d63463c45e072d5c67915a27a90cccf727e30bdb7c8f9d9f531356f0\"" Jul 12 00:26:37.696008 containerd[1535]: time="2025-07-12T00:26:37.695685293Z" level=info msg="StartContainer for 
\"739c66a1d63463c45e072d5c67915a27a90cccf727e30bdb7c8f9d9f531356f0\"" Jul 12 00:26:37.696232 containerd[1535]: time="2025-07-12T00:26:37.696193891Z" level=info msg="CreateContainer within sandbox \"b96d51af0ae49a66f6c123ee4de32fc4a4eb672dc72574c3b583c3c84c70973d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5f93a9f835f21f96501b8f5d3a6ba830cb697f5902ef59493f3321893ec154d6\"" Jul 12 00:26:37.696651 containerd[1535]: time="2025-07-12T00:26:37.696622189Z" level=info msg="StartContainer for \"5f93a9f835f21f96501b8f5d3a6ba830cb697f5902ef59493f3321893ec154d6\"" Jul 12 00:26:37.713953 kubelet[2239]: W0712 00:26:37.713819 2239 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jul 12 00:26:37.713953 kubelet[2239]: E0712 00:26:37.713888 2239 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:37.760196 containerd[1535]: time="2025-07-12T00:26:37.760104445Z" level=info msg="StartContainer for \"e189095f6d95d91aaf0131fac24ce389286aeac347c2cc2900fec7d0a7bdfc0e\" returns successfully" Jul 12 00:26:37.768413 containerd[1535]: time="2025-07-12T00:26:37.768325801Z" level=info msg="StartContainer for \"5f93a9f835f21f96501b8f5d3a6ba830cb697f5902ef59493f3321893ec154d6\" returns successfully" Jul 12 00:26:37.773635 containerd[1535]: time="2025-07-12T00:26:37.773304328Z" level=info msg="StartContainer for \"739c66a1d63463c45e072d5c67915a27a90cccf727e30bdb7c8f9d9f531356f0\" returns successfully" Jul 12 00:26:37.916821 kubelet[2239]: W0712 00:26:37.916754 2239 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Jul 12 00:26:37.917009 kubelet[2239]: E0712 00:26:37.916960 2239 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:38.072520 kubelet[2239]: I0712 00:26:38.072448 2239 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:26:38.559784 kubelet[2239]: E0712 00:26:38.559755 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:38.562454 kubelet[2239]: E0712 00:26:38.562422 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:38.565887 kubelet[2239]: E0712 00:26:38.565803 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:39.092417 kubelet[2239]: E0712 00:26:39.092364 2239 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 12 00:26:39.259686 kubelet[2239]: I0712 00:26:39.259439 2239 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 12 00:26:39.259686 kubelet[2239]: E0712 00:26:39.259487 2239 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 12 00:26:39.513893 kubelet[2239]: I0712 00:26:39.513779 2239 apiserver.go:52] "Watching apiserver" Jul 12 00:26:39.529957 kubelet[2239]: I0712 00:26:39.529922 2239 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:26:39.571696 kubelet[2239]: E0712 00:26:39.571652 2239 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 00:26:39.573254 kubelet[2239]: E0712 00:26:39.571840 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:40.575595 kubelet[2239]: E0712 00:26:40.575514 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:41.008371 systemd[1]: Reloading requested from client PID 2516 ('systemctl') (unit session-7.scope)... Jul 12 00:26:41.008388 systemd[1]: Reloading... Jul 12 00:26:41.066362 zram_generator::config[2558]: No configuration found. Jul 12 00:26:41.154190 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:26:41.213009 systemd[1]: Reloading finished in 204 ms. Jul 12 00:26:41.238555 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:26:41.251231 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:26:41.251593 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:26:41.263468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:26:41.359102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:26:41.363562 (kubelet)[2607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:26:41.400076 kubelet[2607]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:26:41.400076 kubelet[2607]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:26:41.400076 kubelet[2607]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
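The 00:26:39 error "no PriorityClass with name system-node-critical was found" is transient: system-node-critical is one of the built-in priority classes the API server creates as it finishes starting, so the mirror pods for the static control-plane pods succeed on a later sync (as the later "already exists" message shows). A small sketch of checking for that class with client-go, assuming the same illustrative kubeconfig path as the sketch above:

// Hypothetical sketch: confirm the built-in PriorityClass exists once the API server is up.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pc, err := client.SchedulingV1().PriorityClasses().Get(context.TODO(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		fmt.Println("not created yet:", err)
		return
	}
	fmt.Println("system-node-critical value:", pc.Value)
}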
Jul 12 00:26:41.400509 kubelet[2607]: I0712 00:26:41.400137 2607 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:26:41.407980 kubelet[2607]: I0712 00:26:41.407943 2607 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:26:41.407980 kubelet[2607]: I0712 00:26:41.407979 2607 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:26:41.408251 kubelet[2607]: I0712 00:26:41.408222 2607 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:26:41.409620 kubelet[2607]: I0712 00:26:41.409603 2607 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:26:41.411687 kubelet[2607]: I0712 00:26:41.411649 2607 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:26:41.418067 kubelet[2607]: E0712 00:26:41.418028 2607 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:26:41.418067 kubelet[2607]: I0712 00:26:41.418066 2607 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:26:41.421708 kubelet[2607]: I0712 00:26:41.421655 2607 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:26:41.422577 kubelet[2607]: I0712 00:26:41.422553 2607 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:26:41.422703 kubelet[2607]: I0712 00:26:41.422670 2607 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:26:41.422884 kubelet[2607]: I0712 00:26:41.422706 2607 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 12 00:26:41.422972 kubelet[2607]: I0712 00:26:41.422894 2607 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:26:41.422972 kubelet[2607]: I0712 00:26:41.422904 2607 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:26:41.422972 kubelet[2607]: I0712 00:26:41.422942 2607 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:26:41.423052 kubelet[2607]: I0712 00:26:41.423041 2607 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:26:41.423078 kubelet[2607]: I0712 00:26:41.423056 2607 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:26:41.423102 kubelet[2607]: I0712 00:26:41.423079 2607 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:26:41.423102 kubelet[2607]: I0712 00:26:41.423096 2607 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:26:41.424005 kubelet[2607]: I0712 00:26:41.423973 2607 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:26:41.426294 kubelet[2607]: I0712 00:26:41.426256 2607 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:26:41.426729 kubelet[2607]: I0712 00:26:41.426709 2607 server.go:1274] "Started kubelet" Jul 12 00:26:41.427072 kubelet[2607]: I0712 00:26:41.427015 2607 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:26:41.427290 kubelet[2607]: I0712 00:26:41.427273 2607 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:26:41.428726 kubelet[2607]: I0712 00:26:41.426800 2607 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:26:41.431617 kubelet[2607]: I0712 00:26:41.431590 2607 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:26:41.434608 kubelet[2607]: I0712 00:26:41.433878 2607 server.go:449] "Adding 
debug handlers to kubelet server" Jul 12 00:26:41.435070 kubelet[2607]: I0712 00:26:41.435052 2607 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:26:41.446974 kubelet[2607]: I0712 00:26:41.443940 2607 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:26:41.446974 kubelet[2607]: E0712 00:26:41.444512 2607 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:26:41.446974 kubelet[2607]: I0712 00:26:41.445199 2607 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:26:41.446974 kubelet[2607]: I0712 00:26:41.445775 2607 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:26:41.446974 kubelet[2607]: I0712 00:26:41.445948 2607 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:26:41.446974 kubelet[2607]: I0712 00:26:41.446054 2607 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:26:41.448617 kubelet[2607]: I0712 00:26:41.448283 2607 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:26:41.450556 kubelet[2607]: I0712 00:26:41.450518 2607 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:26:41.451920 kubelet[2607]: I0712 00:26:41.451899 2607 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:26:41.451991 kubelet[2607]: I0712 00:26:41.451925 2607 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:26:41.451991 kubelet[2607]: I0712 00:26:41.451945 2607 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:26:41.451991 kubelet[2607]: E0712 00:26:41.451983 2607 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:26:41.492864 kubelet[2607]: I0712 00:26:41.492821 2607 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:26:41.492864 kubelet[2607]: I0712 00:26:41.492853 2607 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:26:41.492864 kubelet[2607]: I0712 00:26:41.492876 2607 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:26:41.493040 kubelet[2607]: I0712 00:26:41.493023 2607 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:26:41.493065 kubelet[2607]: I0712 00:26:41.493033 2607 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:26:41.493065 kubelet[2607]: I0712 00:26:41.493052 2607 policy_none.go:49] "None policy: Start" Jul 12 00:26:41.493604 kubelet[2607]: I0712 00:26:41.493587 2607 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:26:41.493650 kubelet[2607]: I0712 00:26:41.493611 2607 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:26:41.493759 kubelet[2607]: I0712 00:26:41.493746 2607 state_mem.go:75] "Updated machine memory state" Jul 12 00:26:41.496021 kubelet[2607]: I0712 00:26:41.494982 2607 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:26:41.496021 kubelet[2607]: I0712 00:26:41.495157 2607 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 
00:26:41.496021 kubelet[2607]: I0712 00:26:41.495168 2607 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:26:41.496021 kubelet[2607]: I0712 00:26:41.495364 2607 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:26:41.560760 kubelet[2607]: E0712 00:26:41.560730 2607 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 00:26:41.601798 kubelet[2607]: I0712 00:26:41.601765 2607 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:26:41.608088 kubelet[2607]: I0712 00:26:41.608047 2607 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 12 00:26:41.608181 kubelet[2607]: I0712 00:26:41.608172 2607 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 12 00:26:41.647011 kubelet[2607]: I0712 00:26:41.646965 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5266373b8fcc6bb8405c7389c57c042-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5266373b8fcc6bb8405c7389c57c042\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:26:41.647011 kubelet[2607]: I0712 00:26:41.647006 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:26:41.647168 kubelet[2607]: I0712 00:26:41.647029 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:26:41.647168 kubelet[2607]: I0712 00:26:41.647044 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5266373b8fcc6bb8405c7389c57c042-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5266373b8fcc6bb8405c7389c57c042\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:26:41.647168 kubelet[2607]: I0712 00:26:41.647061 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5266373b8fcc6bb8405c7389c57c042-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d5266373b8fcc6bb8405c7389c57c042\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:26:41.647168 kubelet[2607]: I0712 00:26:41.647078 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:26:41.647168 kubelet[2607]: I0712 00:26:41.647094 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:26:41.647378 kubelet[2607]: I0712 00:26:41.647111 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:26:41.647378 kubelet[2607]: I0712 00:26:41.647126 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:26:41.860822 kubelet[2607]: E0712 00:26:41.860486 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:41.860905 kubelet[2607]: E0712 00:26:41.860841 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:41.861231 kubelet[2607]: E0712 00:26:41.861066 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:42.425304 kubelet[2607]: I0712 00:26:42.424196 2607 apiserver.go:52] "Watching apiserver" Jul 12 00:26:42.446386 kubelet[2607]: I0712 00:26:42.446353 2607 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:26:42.465384 kubelet[2607]: E0712 00:26:42.465355 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:42.465534 kubelet[2607]: E0712 00:26:42.465465 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:42.465729 kubelet[2607]: E0712 00:26:42.465714 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:42.494660 kubelet[2607]: I0712 00:26:42.494212 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.494197695 podStartE2EDuration="2.494197695s" podCreationTimestamp="2025-07-12 00:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:42.494049399 +0000 UTC m=+1.126601697" watchObservedRunningTime="2025-07-12 00:26:42.494197695 +0000 UTC m=+1.126749993" Jul 12 00:26:42.494660 kubelet[2607]: I0712 00:26:42.494327 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.494322775 podStartE2EDuration="1.494322775s" podCreationTimestamp="2025-07-12 00:26:41 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:42.486495014 +0000 UTC m=+1.119047312" watchObservedRunningTime="2025-07-12 00:26:42.494322775 +0000 UTC m=+1.126875033" Jul 12 00:26:43.466893 kubelet[2607]: E0712 00:26:43.466547 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:45.386992 kubelet[2607]: E0712 00:26:45.386944 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:48.384943 kubelet[2607]: I0712 00:26:48.384911 2607 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:26:48.385783 containerd[1535]: time="2025-07-12T00:26:48.385746993Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:26:48.386270 kubelet[2607]: I0712 00:26:48.385921 2607 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:26:48.780280 kubelet[2607]: E0712 00:26:48.779436 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:48.793562 kubelet[2607]: I0712 00:26:48.793374 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.793361319 podStartE2EDuration="7.793361319s" podCreationTimestamp="2025-07-12 00:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:42.500568157 +0000 UTC m=+1.133120495" watchObservedRunningTime="2025-07-12 00:26:48.793361319 +0000 UTC m=+7.425913577" Jul 12 00:26:48.992944 kubelet[2607]: I0712 00:26:48.992896 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/866133ac-324b-42c3-9658-b61f05516547-kube-proxy\") pod \"kube-proxy-xgh4r\" (UID: \"866133ac-324b-42c3-9658-b61f05516547\") " pod="kube-system/kube-proxy-xgh4r" Jul 12 00:26:48.992944 kubelet[2607]: I0712 00:26:48.992934 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/866133ac-324b-42c3-9658-b61f05516547-xtables-lock\") pod \"kube-proxy-xgh4r\" (UID: \"866133ac-324b-42c3-9658-b61f05516547\") " pod="kube-system/kube-proxy-xgh4r" Jul 12 00:26:48.993076 kubelet[2607]: I0712 00:26:48.992957 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj27c\" (UniqueName: \"kubernetes.io/projected/866133ac-324b-42c3-9658-b61f05516547-kube-api-access-cj27c\") pod \"kube-proxy-xgh4r\" (UID: \"866133ac-324b-42c3-9658-b61f05516547\") " pod="kube-system/kube-proxy-xgh4r" Jul 12 00:26:48.993076 kubelet[2607]: I0712 00:26:48.992977 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/866133ac-324b-42c3-9658-b61f05516547-lib-modules\") pod \"kube-proxy-xgh4r\" (UID: \"866133ac-324b-42c3-9658-b61f05516547\") " 
pod="kube-system/kube-proxy-xgh4r" Jul 12 00:26:49.100708 kubelet[2607]: E0712 00:26:49.100673 2607 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 12 00:26:49.100708 kubelet[2607]: E0712 00:26:49.100703 2607 projected.go:194] Error preparing data for projected volume kube-api-access-cj27c for pod kube-system/kube-proxy-xgh4r: configmap "kube-root-ca.crt" not found Jul 12 00:26:49.100834 kubelet[2607]: E0712 00:26:49.100749 2607 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/866133ac-324b-42c3-9658-b61f05516547-kube-api-access-cj27c podName:866133ac-324b-42c3-9658-b61f05516547 nodeName:}" failed. No retries permitted until 2025-07-12 00:26:49.600730917 +0000 UTC m=+8.233283215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cj27c" (UniqueName: "kubernetes.io/projected/866133ac-324b-42c3-9658-b61f05516547-kube-api-access-cj27c") pod "kube-proxy-xgh4r" (UID: "866133ac-324b-42c3-9658-b61f05516547") : configmap "kube-root-ca.crt" not found Jul 12 00:26:49.479080 kubelet[2607]: E0712 00:26:49.478969 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:49.596510 kubelet[2607]: I0712 00:26:49.596428 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvpwp\" (UniqueName: \"kubernetes.io/projected/790b5bee-2a40-456f-9119-13da91e88e4a-kube-api-access-wvpwp\") pod \"tigera-operator-5bf8dfcb4-n5wx8\" (UID: \"790b5bee-2a40-456f-9119-13da91e88e4a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-n5wx8" Jul 12 00:26:49.596510 kubelet[2607]: I0712 00:26:49.596475 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/790b5bee-2a40-456f-9119-13da91e88e4a-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-n5wx8\" (UID: \"790b5bee-2a40-456f-9119-13da91e88e4a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-n5wx8" Jul 12 00:26:49.798144 containerd[1535]: time="2025-07-12T00:26:49.798009258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-n5wx8,Uid:790b5bee-2a40-456f-9119-13da91e88e4a,Namespace:tigera-operator,Attempt:0,}" Jul 12 00:26:49.816811 containerd[1535]: time="2025-07-12T00:26:49.816705570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:49.816811 containerd[1535]: time="2025-07-12T00:26:49.816777036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:49.816811 containerd[1535]: time="2025-07-12T00:26:49.816795603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:49.817303 containerd[1535]: time="2025-07-12T00:26:49.817227165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:49.858148 containerd[1535]: time="2025-07-12T00:26:49.858074239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-n5wx8,Uid:790b5bee-2a40-456f-9119-13da91e88e4a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"46a5516893d8288634efe467b83488ad192731c5d9df660844583877aa7b01ed\"" Jul 12 00:26:49.859648 containerd[1535]: time="2025-07-12T00:26:49.859456716Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 00:26:49.878434 kubelet[2607]: E0712 00:26:49.878402 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:49.878811 containerd[1535]: time="2025-07-12T00:26:49.878779262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgh4r,Uid:866133ac-324b-42c3-9658-b61f05516547,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:49.895727 containerd[1535]: time="2025-07-12T00:26:49.895303521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:49.895727 containerd[1535]: time="2025-07-12T00:26:49.895686944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:49.895727 containerd[1535]: time="2025-07-12T00:26:49.895699949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:49.895879 containerd[1535]: time="2025-07-12T00:26:49.895791783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:49.926595 containerd[1535]: time="2025-07-12T00:26:49.926553606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgh4r,Uid:866133ac-324b-42c3-9658-b61f05516547,Namespace:kube-system,Attempt:0,} returns sandbox id \"f652c48a200777326a68c4d5a3f63764b6d89e1644a602cef9cfbe7755d4cdd8\"" Jul 12 00:26:49.927048 kubelet[2607]: E0712 00:26:49.927019 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:49.929029 containerd[1535]: time="2025-07-12T00:26:49.928976713Z" level=info msg="CreateContainer within sandbox \"f652c48a200777326a68c4d5a3f63764b6d89e1644a602cef9cfbe7755d4cdd8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:26:49.943205 containerd[1535]: time="2025-07-12T00:26:49.943161697Z" level=info msg="CreateContainer within sandbox \"f652c48a200777326a68c4d5a3f63764b6d89e1644a602cef9cfbe7755d4cdd8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"80f4dbc517a320aa6d3d6d0718886fcb1a8a73fb3ac6212ab889a4f89d56994a\"" Jul 12 00:26:49.943621 containerd[1535]: time="2025-07-12T00:26:49.943594819Z" level=info msg="StartContainer for \"80f4dbc517a320aa6d3d6d0718886fcb1a8a73fb3ac6212ab889a4f89d56994a\"" Jul 12 00:26:49.992570 containerd[1535]: time="2025-07-12T00:26:49.991929533Z" level=info msg="StartContainer for \"80f4dbc517a320aa6d3d6d0718886fcb1a8a73fb3ac6212ab889a4f89d56994a\" returns successfully" Jul 12 00:26:50.482007 kubelet[2607]: E0712 00:26:50.481419 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:50.491290 kubelet[2607]: I0712 00:26:50.491212 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xgh4r" podStartSLOduration=2.491195997 podStartE2EDuration="2.491195997s" podCreationTimestamp="2025-07-12 00:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:50.490625475 +0000 UTC m=+9.123177773" watchObservedRunningTime="2025-07-12 00:26:50.491195997 +0000 UTC m=+9.123748295" Jul 12 00:26:50.857556 kubelet[2607]: E0712 00:26:50.857226 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:51.102053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3026074433.mount: Deactivated successfully. 
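The "Nameserver limits exceeded" messages that recur throughout this boot come from the kubelet's pod DNS setup: it passes at most three nameservers from the host resolv.conf to pods (the classic resolver limit of three), so with more than three configured it logs this error and applies only the first three, here 1.1.1.1 1.0.0.1 8.8.8.8. A small, self-contained sketch of that truncation, assuming the host file is /etc/resolv.conf:

// Sketch of the truncation the kubelet is reporting: keep only the first three
// nameserver lines from the host resolv.conf and drop the rest.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxDNSNameservers = 3 // same limit the kubelet applies

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxDNSNameservers {
		fmt.Printf("nameserver limits exceeded, applying: %s\n",
			strings.Join(servers[:maxDNSNameservers], " "))
	} else {
		fmt.Println("applied nameservers:", strings.Join(servers, " "))
	}
}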
Jul 12 00:26:51.484764 kubelet[2607]: E0712 00:26:51.484720 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:51.556292 containerd[1535]: time="2025-07-12T00:26:51.556231574Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:51.557218 containerd[1535]: time="2025-07-12T00:26:51.557015757Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 12 00:26:51.557909 containerd[1535]: time="2025-07-12T00:26:51.557874365Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:51.561585 containerd[1535]: time="2025-07-12T00:26:51.561552197Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:26:51.563185 containerd[1535]: time="2025-07-12T00:26:51.562892726Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.703402238s" Jul 12 00:26:51.563185 containerd[1535]: time="2025-07-12T00:26:51.562932779Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 12 00:26:51.566318 containerd[1535]: time="2025-07-12T00:26:51.566223842Z" level=info msg="CreateContainer within sandbox \"46a5516893d8288634efe467b83488ad192731c5d9df660844583877aa7b01ed\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 00:26:51.594172 containerd[1535]: time="2025-07-12T00:26:51.594120547Z" level=info msg="CreateContainer within sandbox \"46a5516893d8288634efe467b83488ad192731c5d9df660844583877aa7b01ed\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bed3a89a04c18d80b514796b9f964e83b29bece598bc3723d36fa85596754471\"" Jul 12 00:26:51.594882 containerd[1535]: time="2025-07-12T00:26:51.594852072Z" level=info msg="StartContainer for \"bed3a89a04c18d80b514796b9f964e83b29bece598bc3723d36fa85596754471\"" Jul 12 00:26:51.634937 containerd[1535]: time="2025-07-12T00:26:51.634901207Z" level=info msg="StartContainer for \"bed3a89a04c18d80b514796b9f964e83b29bece598bc3723d36fa85596754471\" returns successfully" Jul 12 00:26:52.495395 kubelet[2607]: I0712 00:26:52.495309 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-n5wx8" podStartSLOduration=1.79070851 podStartE2EDuration="3.495293998s" podCreationTimestamp="2025-07-12 00:26:49 +0000 UTC" firstStartedPulling="2025-07-12 00:26:49.859111347 +0000 UTC m=+8.491663645" lastFinishedPulling="2025-07-12 00:26:51.563696835 +0000 UTC m=+10.196249133" observedRunningTime="2025-07-12 00:26:52.494979178 +0000 UTC m=+11.127531476" watchObservedRunningTime="2025-07-12 00:26:52.495293998 +0000 UTC m=+11.127846256" Jul 12 00:26:55.399516 kubelet[2607]: E0712 00:26:55.399485 2607 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:55.493628 kubelet[2607]: E0712 00:26:55.493588 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:26:56.831364 sudo[1743]: pam_unix(sudo:session): session closed for user root Jul 12 00:26:56.836024 sshd[1736]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:56.840840 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:49760.service: Deactivated successfully. Jul 12 00:26:56.846521 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:26:56.847493 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:26:56.851848 systemd-logind[1519]: Removed session 7. Jul 12 00:26:58.119371 update_engine[1522]: I20250712 00:26:58.119282 1522 update_attempter.cc:509] Updating boot flags... Jul 12 00:26:58.175332 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3010) Jul 12 00:26:58.236283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3009) Jul 12 00:27:03.013661 kubelet[2607]: I0712 00:27:03.013550 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae508d17-2d9c-480c-95e6-5e82e520b7c6-tigera-ca-bundle\") pod \"calico-typha-5c7486f44c-chdqt\" (UID: \"ae508d17-2d9c-480c-95e6-5e82e520b7c6\") " pod="calico-system/calico-typha-5c7486f44c-chdqt" Jul 12 00:27:03.013661 kubelet[2607]: I0712 00:27:03.013606 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st62f\" (UniqueName: \"kubernetes.io/projected/ae508d17-2d9c-480c-95e6-5e82e520b7c6-kube-api-access-st62f\") pod \"calico-typha-5c7486f44c-chdqt\" (UID: \"ae508d17-2d9c-480c-95e6-5e82e520b7c6\") " pod="calico-system/calico-typha-5c7486f44c-chdqt" Jul 12 00:27:03.014188 kubelet[2607]: I0712 00:27:03.013629 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ae508d17-2d9c-480c-95e6-5e82e520b7c6-typha-certs\") pod \"calico-typha-5c7486f44c-chdqt\" (UID: \"ae508d17-2d9c-480c-95e6-5e82e520b7c6\") " pod="calico-system/calico-typha-5c7486f44c-chdqt" Jul 12 00:27:03.303490 kubelet[2607]: E0712 00:27:03.303072 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:03.304104 containerd[1535]: time="2025-07-12T00:27:03.303777086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c7486f44c-chdqt,Uid:ae508d17-2d9c-480c-95e6-5e82e520b7c6,Namespace:calico-system,Attempt:0,}" Jul 12 00:27:03.341080 containerd[1535]: time="2025-07-12T00:27:03.340176290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:03.341195 containerd[1535]: time="2025-07-12T00:27:03.341121382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:03.341195 containerd[1535]: time="2025-07-12T00:27:03.341140586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:03.341355 containerd[1535]: time="2025-07-12T00:27:03.341316418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:03.389929 containerd[1535]: time="2025-07-12T00:27:03.389889683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c7486f44c-chdqt,Uid:ae508d17-2d9c-480c-95e6-5e82e520b7c6,Namespace:calico-system,Attempt:0,} returns sandbox id \"fba99ab7c4819dc66ff7c30d67b7732d1ed1db3f9b5a2aa7a9132ff261343ac7\"" Jul 12 00:27:03.390891 kubelet[2607]: E0712 00:27:03.390869 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:03.392584 containerd[1535]: time="2025-07-12T00:27:03.392539847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 00:27:03.419219 kubelet[2607]: I0712 00:27:03.419144 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3396bcfa-b496-41f8-9a37-caa68225e994-var-run-calico\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.420387 kubelet[2607]: I0712 00:27:03.419730 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3396bcfa-b496-41f8-9a37-caa68225e994-node-certs\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.420387 kubelet[2607]: I0712 00:27:03.419757 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3396bcfa-b496-41f8-9a37-caa68225e994-tigera-ca-bundle\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.420387 kubelet[2607]: I0712 00:27:03.419776 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3396bcfa-b496-41f8-9a37-caa68225e994-flexvol-driver-host\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.420387 kubelet[2607]: I0712 00:27:03.419795 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3396bcfa-b496-41f8-9a37-caa68225e994-cni-net-dir\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.420387 kubelet[2607]: I0712 00:27:03.419812 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3396bcfa-b496-41f8-9a37-caa68225e994-var-lib-calico\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.420937 kubelet[2607]: I0712 
00:27:03.419829 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4hfh\" (UniqueName: \"kubernetes.io/projected/3396bcfa-b496-41f8-9a37-caa68225e994-kube-api-access-l4hfh\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.420937 kubelet[2607]: I0712 00:27:03.419844 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3396bcfa-b496-41f8-9a37-caa68225e994-policysync\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.420937 kubelet[2607]: I0712 00:27:03.419858 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3396bcfa-b496-41f8-9a37-caa68225e994-cni-bin-dir\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.420937 kubelet[2607]: I0712 00:27:03.419873 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3396bcfa-b496-41f8-9a37-caa68225e994-lib-modules\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.420937 kubelet[2607]: I0712 00:27:03.419887 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3396bcfa-b496-41f8-9a37-caa68225e994-xtables-lock\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.422285 kubelet[2607]: I0712 00:27:03.419902 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3396bcfa-b496-41f8-9a37-caa68225e994-cni-log-dir\") pod \"calico-node-7k78z\" (UID: \"3396bcfa-b496-41f8-9a37-caa68225e994\") " pod="calico-system/calico-node-7k78z" Jul 12 00:27:03.531175 kubelet[2607]: E0712 00:27:03.531079 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.531175 kubelet[2607]: W0712 00:27:03.531108 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.531175 kubelet[2607]: E0712 00:27:03.531135 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.630081 kubelet[2607]: E0712 00:27:03.629884 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g8ld7" podUID="37964168-0f35-42e8-bcb1-d8b1fcfa1415" Jul 12 00:27:03.634772 containerd[1535]: time="2025-07-12T00:27:03.634062408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7k78z,Uid:3396bcfa-b496-41f8-9a37-caa68225e994,Namespace:calico-system,Attempt:0,}" Jul 12 00:27:03.661775 containerd[1535]: time="2025-07-12T00:27:03.661635040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:03.661775 containerd[1535]: time="2025-07-12T00:27:03.661701972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:03.661775 containerd[1535]: time="2025-07-12T00:27:03.661719776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:03.662588 containerd[1535]: time="2025-07-12T00:27:03.662442227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:03.698156 containerd[1535]: time="2025-07-12T00:27:03.698122299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7k78z,Uid:3396bcfa-b496-41f8-9a37-caa68225e994,Namespace:calico-system,Attempt:0,} returns sandbox id \"212d16a3794ad17b03a358c6538f9152a6b3ac500478a33e4e4a918aecfde6ab\"" Jul 12 00:27:03.707333 kubelet[2607]: E0712 00:27:03.707297 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.707333 kubelet[2607]: W0712 00:27:03.707321 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.707436 kubelet[2607]: E0712 00:27:03.707340 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.707579 kubelet[2607]: E0712 00:27:03.707555 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.707579 kubelet[2607]: W0712 00:27:03.707568 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.707579 kubelet[2607]: E0712 00:27:03.707577 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.707764 kubelet[2607]: E0712 00:27:03.707744 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.707764 kubelet[2607]: W0712 00:27:03.707756 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.707816 kubelet[2607]: E0712 00:27:03.707765 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.707957 kubelet[2607]: E0712 00:27:03.707937 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.707957 kubelet[2607]: W0712 00:27:03.707950 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.708008 kubelet[2607]: E0712 00:27:03.707958 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.708187 kubelet[2607]: E0712 00:27:03.708167 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.708187 kubelet[2607]: W0712 00:27:03.708181 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.708247 kubelet[2607]: E0712 00:27:03.708190 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.708371 kubelet[2607]: E0712 00:27:03.708360 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.708397 kubelet[2607]: W0712 00:27:03.708370 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.708397 kubelet[2607]: E0712 00:27:03.708378 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.708532 kubelet[2607]: E0712 00:27:03.708521 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.708558 kubelet[2607]: W0712 00:27:03.708532 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.708558 kubelet[2607]: E0712 00:27:03.708540 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.708691 kubelet[2607]: E0712 00:27:03.708682 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.708717 kubelet[2607]: W0712 00:27:03.708691 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.708717 kubelet[2607]: E0712 00:27:03.708698 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.708855 kubelet[2607]: E0712 00:27:03.708843 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.708880 kubelet[2607]: W0712 00:27:03.708855 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.708880 kubelet[2607]: E0712 00:27:03.708864 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.708996 kubelet[2607]: E0712 00:27:03.708987 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.709020 kubelet[2607]: W0712 00:27:03.708995 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.709020 kubelet[2607]: E0712 00:27:03.709002 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.709134 kubelet[2607]: E0712 00:27:03.709125 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.709161 kubelet[2607]: W0712 00:27:03.709135 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.709161 kubelet[2607]: E0712 00:27:03.709142 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.709312 kubelet[2607]: E0712 00:27:03.709295 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.709312 kubelet[2607]: W0712 00:27:03.709304 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.709312 kubelet[2607]: E0712 00:27:03.709312 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.709483 kubelet[2607]: E0712 00:27:03.709470 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.709483 kubelet[2607]: W0712 00:27:03.709481 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.709537 kubelet[2607]: E0712 00:27:03.709489 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.709626 kubelet[2607]: E0712 00:27:03.709617 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.709652 kubelet[2607]: W0712 00:27:03.709626 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.709652 kubelet[2607]: E0712 00:27:03.709633 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.709759 kubelet[2607]: E0712 00:27:03.709751 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.709784 kubelet[2607]: W0712 00:27:03.709759 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.709784 kubelet[2607]: E0712 00:27:03.709766 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.709906 kubelet[2607]: E0712 00:27:03.709898 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.709931 kubelet[2607]: W0712 00:27:03.709906 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.709931 kubelet[2607]: E0712 00:27:03.709913 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.710095 kubelet[2607]: E0712 00:27:03.710084 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.710095 kubelet[2607]: W0712 00:27:03.710093 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.710152 kubelet[2607]: E0712 00:27:03.710100 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.710245 kubelet[2607]: E0712 00:27:03.710227 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.710272 kubelet[2607]: W0712 00:27:03.710247 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.710272 kubelet[2607]: E0712 00:27:03.710255 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.710393 kubelet[2607]: E0712 00:27:03.710384 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.710416 kubelet[2607]: W0712 00:27:03.710393 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.710416 kubelet[2607]: E0712 00:27:03.710400 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.710549 kubelet[2607]: E0712 00:27:03.710540 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.710578 kubelet[2607]: W0712 00:27:03.710549 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.710578 kubelet[2607]: E0712 00:27:03.710556 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.721847 kubelet[2607]: E0712 00:27:03.721821 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.721847 kubelet[2607]: W0712 00:27:03.721838 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.721847 kubelet[2607]: E0712 00:27:03.721851 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.721951 kubelet[2607]: I0712 00:27:03.721870 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/37964168-0f35-42e8-bcb1-d8b1fcfa1415-varrun\") pod \"csi-node-driver-g8ld7\" (UID: \"37964168-0f35-42e8-bcb1-d8b1fcfa1415\") " pod="calico-system/csi-node-driver-g8ld7" Jul 12 00:27:03.722107 kubelet[2607]: E0712 00:27:03.722084 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.722162 kubelet[2607]: W0712 00:27:03.722097 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.722162 kubelet[2607]: E0712 00:27:03.722121 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.722162 kubelet[2607]: I0712 00:27:03.722137 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdps\" (UniqueName: \"kubernetes.io/projected/37964168-0f35-42e8-bcb1-d8b1fcfa1415-kube-api-access-xgdps\") pod \"csi-node-driver-g8ld7\" (UID: \"37964168-0f35-42e8-bcb1-d8b1fcfa1415\") " pod="calico-system/csi-node-driver-g8ld7" Jul 12 00:27:03.722430 kubelet[2607]: E0712 00:27:03.722402 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.722430 kubelet[2607]: W0712 00:27:03.722423 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.722498 kubelet[2607]: E0712 00:27:03.722442 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.722651 kubelet[2607]: E0712 00:27:03.722630 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.722651 kubelet[2607]: W0712 00:27:03.722644 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.722741 kubelet[2607]: E0712 00:27:03.722655 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.722854 kubelet[2607]: E0712 00:27:03.722840 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.722884 kubelet[2607]: W0712 00:27:03.722854 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.722884 kubelet[2607]: E0712 00:27:03.722867 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.722934 kubelet[2607]: I0712 00:27:03.722886 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/37964168-0f35-42e8-bcb1-d8b1fcfa1415-socket-dir\") pod \"csi-node-driver-g8ld7\" (UID: \"37964168-0f35-42e8-bcb1-d8b1fcfa1415\") " pod="calico-system/csi-node-driver-g8ld7" Jul 12 00:27:03.723114 kubelet[2607]: E0712 00:27:03.723100 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.723136 kubelet[2607]: W0712 00:27:03.723117 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.723136 kubelet[2607]: E0712 00:27:03.723134 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.723178 kubelet[2607]: I0712 00:27:03.723150 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37964168-0f35-42e8-bcb1-d8b1fcfa1415-kubelet-dir\") pod \"csi-node-driver-g8ld7\" (UID: \"37964168-0f35-42e8-bcb1-d8b1fcfa1415\") " pod="calico-system/csi-node-driver-g8ld7" Jul 12 00:27:03.723424 kubelet[2607]: E0712 00:27:03.723408 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.723424 kubelet[2607]: W0712 00:27:03.723422 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.723521 kubelet[2607]: E0712 00:27:03.723506 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.723551 kubelet[2607]: I0712 00:27:03.723531 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/37964168-0f35-42e8-bcb1-d8b1fcfa1415-registration-dir\") pod \"csi-node-driver-g8ld7\" (UID: \"37964168-0f35-42e8-bcb1-d8b1fcfa1415\") " pod="calico-system/csi-node-driver-g8ld7" Jul 12 00:27:03.723766 kubelet[2607]: E0712 00:27:03.723740 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.723766 kubelet[2607]: W0712 00:27:03.723754 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.723856 kubelet[2607]: E0712 00:27:03.723834 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.723936 kubelet[2607]: E0712 00:27:03.723925 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.723961 kubelet[2607]: W0712 00:27:03.723937 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.723961 kubelet[2607]: E0712 00:27:03.723954 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.724137 kubelet[2607]: E0712 00:27:03.724127 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.724159 kubelet[2607]: W0712 00:27:03.724138 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.724159 kubelet[2607]: E0712 00:27:03.724150 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.724313 kubelet[2607]: E0712 00:27:03.724301 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.724343 kubelet[2607]: W0712 00:27:03.724313 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.724343 kubelet[2607]: E0712 00:27:03.724326 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.724602 kubelet[2607]: E0712 00:27:03.724589 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.724625 kubelet[2607]: W0712 00:27:03.724602 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.724625 kubelet[2607]: E0712 00:27:03.724611 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.724786 kubelet[2607]: E0712 00:27:03.724775 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.724810 kubelet[2607]: W0712 00:27:03.724787 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.724810 kubelet[2607]: E0712 00:27:03.724796 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.724960 kubelet[2607]: E0712 00:27:03.724949 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.724984 kubelet[2607]: W0712 00:27:03.724960 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.724984 kubelet[2607]: E0712 00:27:03.724967 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.725132 kubelet[2607]: E0712 00:27:03.725121 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.725156 kubelet[2607]: W0712 00:27:03.725132 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.725156 kubelet[2607]: E0712 00:27:03.725140 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.825004 kubelet[2607]: E0712 00:27:03.824975 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.825004 kubelet[2607]: W0712 00:27:03.824997 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.825004 kubelet[2607]: E0712 00:27:03.825017 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.825299 kubelet[2607]: E0712 00:27:03.825271 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.825340 kubelet[2607]: W0712 00:27:03.825300 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.825340 kubelet[2607]: E0712 00:27:03.825320 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.825892 kubelet[2607]: E0712 00:27:03.825876 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.825932 kubelet[2607]: W0712 00:27:03.825906 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.825964 kubelet[2607]: E0712 00:27:03.825932 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.826135 kubelet[2607]: E0712 00:27:03.826122 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.826135 kubelet[2607]: W0712 00:27:03.826134 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.826199 kubelet[2607]: E0712 00:27:03.826166 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.826683 kubelet[2607]: E0712 00:27:03.826655 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.826683 kubelet[2607]: W0712 00:27:03.826672 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.826768 kubelet[2607]: E0712 00:27:03.826734 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.826891 kubelet[2607]: E0712 00:27:03.826876 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.826922 kubelet[2607]: W0712 00:27:03.826890 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.826922 kubelet[2607]: E0712 00:27:03.826912 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.827053 kubelet[2607]: E0712 00:27:03.827043 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.827092 kubelet[2607]: W0712 00:27:03.827055 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.827092 kubelet[2607]: E0712 00:27:03.827072 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.827204 kubelet[2607]: E0712 00:27:03.827194 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.827204 kubelet[2607]: W0712 00:27:03.827204 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.827290 kubelet[2607]: E0712 00:27:03.827222 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.827380 kubelet[2607]: E0712 00:27:03.827369 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.827380 kubelet[2607]: W0712 00:27:03.827380 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.827438 kubelet[2607]: E0712 00:27:03.827395 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.827547 kubelet[2607]: E0712 00:27:03.827536 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.827547 kubelet[2607]: W0712 00:27:03.827547 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.827603 kubelet[2607]: E0712 00:27:03.827559 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.827704 kubelet[2607]: E0712 00:27:03.827695 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.827736 kubelet[2607]: W0712 00:27:03.827704 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.827736 kubelet[2607]: E0712 00:27:03.827715 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.834266 kubelet[2607]: E0712 00:27:03.834224 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.834266 kubelet[2607]: W0712 00:27:03.834260 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.834407 kubelet[2607]: E0712 00:27:03.834281 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.834548 kubelet[2607]: E0712 00:27:03.834531 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.834585 kubelet[2607]: W0712 00:27:03.834549 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.834585 kubelet[2607]: E0712 00:27:03.834567 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.834733 kubelet[2607]: E0712 00:27:03.834722 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.834733 kubelet[2607]: W0712 00:27:03.834733 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.834833 kubelet[2607]: E0712 00:27:03.834806 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.834878 kubelet[2607]: E0712 00:27:03.834867 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.834878 kubelet[2607]: W0712 00:27:03.834877 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.834958 kubelet[2607]: E0712 00:27:03.834944 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.835015 kubelet[2607]: E0712 00:27:03.835005 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.835048 kubelet[2607]: W0712 00:27:03.835015 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.835104 kubelet[2607]: E0712 00:27:03.835083 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.835160 kubelet[2607]: E0712 00:27:03.835149 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.835160 kubelet[2607]: W0712 00:27:03.835159 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.835255 kubelet[2607]: E0712 00:27:03.835224 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.835342 kubelet[2607]: E0712 00:27:03.835332 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.835368 kubelet[2607]: W0712 00:27:03.835343 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.835368 kubelet[2607]: E0712 00:27:03.835356 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.835653 kubelet[2607]: E0712 00:27:03.835584 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.835653 kubelet[2607]: W0712 00:27:03.835594 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.835653 kubelet[2607]: E0712 00:27:03.835610 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.836322 kubelet[2607]: E0712 00:27:03.835771 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.836322 kubelet[2607]: W0712 00:27:03.835783 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.836322 kubelet[2607]: E0712 00:27:03.835801 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.836322 kubelet[2607]: E0712 00:27:03.835969 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.836322 kubelet[2607]: W0712 00:27:03.835977 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.836322 kubelet[2607]: E0712 00:27:03.835991 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.836322 kubelet[2607]: E0712 00:27:03.836126 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.836322 kubelet[2607]: W0712 00:27:03.836133 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.836322 kubelet[2607]: E0712 00:27:03.836148 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.836566 kubelet[2607]: E0712 00:27:03.836412 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.836566 kubelet[2607]: W0712 00:27:03.836423 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.836566 kubelet[2607]: E0712 00:27:03.836444 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:03.836678 kubelet[2607]: E0712 00:27:03.836659 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.836678 kubelet[2607]: W0712 00:27:03.836673 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.836726 kubelet[2607]: E0712 00:27:03.836685 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.837946 kubelet[2607]: E0712 00:27:03.837919 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.838015 kubelet[2607]: W0712 00:27:03.837954 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.838015 kubelet[2607]: E0712 00:27:03.837968 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:03.846515 kubelet[2607]: E0712 00:27:03.846492 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:03.846515 kubelet[2607]: W0712 00:27:03.846512 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:03.846608 kubelet[2607]: E0712 00:27:03.846529 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:04.350781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount21704935.mount: Deactivated successfully. 
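[editor's note] The repeating kubelet errors above are one probe failure logged over and over: the FlexVolume prober sees the plugin directory nodeagent~uds, but the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present, so the `init` call produces empty output and the JSON unmarshal fails ("unexpected end of JSON input"). The log itself shows the failure is not fatal here: the calico-node sandbox is created and the typha image pull and StartContainer at 00:27:05 still succeed. As an illustrative sketch only (not the Calico- or Flatcar-recommended fix), a FlexVolume driver is simply an executable at that path that answers `init` with the status JSON the kubelet expects; the stub below, with the installation path taken from the log and everything else assumed, shows that minimal contract.

```python
#!/usr/bin/env python3
# Minimal FlexVolume driver stub (illustrative sketch, not a supported fix).
# If installed as an executable at the path the kubelet probes in the log above,
#   /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
# it answers the prober's "init" call with the JSON the kubelet expects,
# which is exactly what the missing binary fails to produce.
import json
import sys


def main() -> int:
    cmd = sys.argv[1] if len(sys.argv) > 1 else ""
    if cmd == "init":
        # FlexVolume drivers report success and their capabilities on init.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Decline every other call so the kubelet falls back to other volume plugins.
    print(json.dumps({"status": "Not supported", "message": f"unhandled call: {cmd}"}))
    return 1


if __name__ == "__main__":
    sys.exit(main())
```

Whether to install such a stub, remove the empty nodeagent~uds directory, or simply tolerate the noise is a deployment decision outside this log; the entries that follow resume the unmodified kubelet output.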
Jul 12 00:27:05.128102 containerd[1535]: time="2025-07-12T00:27:05.128056009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:05.128897 containerd[1535]: time="2025-07-12T00:27:05.128633145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 12 00:27:05.129277 containerd[1535]: time="2025-07-12T00:27:05.129252609Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:05.131283 containerd[1535]: time="2025-07-12T00:27:05.131229298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:05.132311 containerd[1535]: time="2025-07-12T00:27:05.132270792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.739652771s" Jul 12 00:27:05.132366 containerd[1535]: time="2025-07-12T00:27:05.132311479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:27:05.133924 containerd[1535]: time="2025-07-12T00:27:05.133879500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:27:05.148804 containerd[1535]: time="2025-07-12T00:27:05.148608356Z" level=info msg="CreateContainer within sandbox \"fba99ab7c4819dc66ff7c30d67b7732d1ed1db3f9b5a2aa7a9132ff261343ac7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:27:05.160328 containerd[1535]: time="2025-07-12T00:27:05.160170563Z" level=info msg="CreateContainer within sandbox \"fba99ab7c4819dc66ff7c30d67b7732d1ed1db3f9b5a2aa7a9132ff261343ac7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"631953666883001a9a6443a141280114c1289c96fee07a3dc88972117a434734\"" Jul 12 00:27:05.160743 containerd[1535]: time="2025-07-12T00:27:05.160709493Z" level=info msg="StartContainer for \"631953666883001a9a6443a141280114c1289c96fee07a3dc88972117a434734\"" Jul 12 00:27:05.250887 containerd[1535]: time="2025-07-12T00:27:05.245493829Z" level=info msg="StartContainer for \"631953666883001a9a6443a141280114c1289c96fee07a3dc88972117a434734\" returns successfully" Jul 12 00:27:05.452878 kubelet[2607]: E0712 00:27:05.452739 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g8ld7" podUID="37964168-0f35-42e8-bcb1-d8b1fcfa1415" Jul 12 00:27:05.521527 kubelet[2607]: E0712 00:27:05.521480 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:05.526215 kubelet[2607]: E0712 00:27:05.526186 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Jul 12 00:27:05.526215 kubelet[2607]: W0712 00:27:05.526208 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.526356 kubelet[2607]: E0712 00:27:05.526227 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.527682 kubelet[2607]: E0712 00:27:05.527654 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.527682 kubelet[2607]: W0712 00:27:05.527676 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.527810 kubelet[2607]: E0712 00:27:05.527690 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.528313 kubelet[2607]: E0712 00:27:05.528268 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.528313 kubelet[2607]: W0712 00:27:05.528284 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.528313 kubelet[2607]: E0712 00:27:05.528297 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.528503 kubelet[2607]: E0712 00:27:05.528493 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.528503 kubelet[2607]: W0712 00:27:05.528504 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.528571 kubelet[2607]: E0712 00:27:05.528512 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.528682 kubelet[2607]: E0712 00:27:05.528671 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.528682 kubelet[2607]: W0712 00:27:05.528681 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.529388 kubelet[2607]: E0712 00:27:05.529292 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:05.529505 kubelet[2607]: E0712 00:27:05.529493 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.529579 kubelet[2607]: W0712 00:27:05.529567 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.529638 kubelet[2607]: E0712 00:27:05.529627 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.529913 kubelet[2607]: E0712 00:27:05.529856 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.529913 kubelet[2607]: W0712 00:27:05.529867 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.529913 kubelet[2607]: E0712 00:27:05.529877 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.530225 kubelet[2607]: E0712 00:27:05.530157 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.530225 kubelet[2607]: W0712 00:27:05.530168 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.530225 kubelet[2607]: E0712 00:27:05.530178 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.530695 kubelet[2607]: E0712 00:27:05.530599 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.530695 kubelet[2607]: W0712 00:27:05.530612 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.530695 kubelet[2607]: E0712 00:27:05.530625 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.530917 kubelet[2607]: E0712 00:27:05.530860 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.530917 kubelet[2607]: W0712 00:27:05.530871 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.530917 kubelet[2607]: E0712 00:27:05.530882 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:05.531222 kubelet[2607]: E0712 00:27:05.531153 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.531222 kubelet[2607]: W0712 00:27:05.531165 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.531222 kubelet[2607]: E0712 00:27:05.531176 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.531770 kubelet[2607]: E0712 00:27:05.531560 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.531770 kubelet[2607]: W0712 00:27:05.531577 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.531770 kubelet[2607]: E0712 00:27:05.531588 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.532228 kubelet[2607]: E0712 00:27:05.532120 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.532228 kubelet[2607]: W0712 00:27:05.532135 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.532228 kubelet[2607]: E0712 00:27:05.532146 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.532423 kubelet[2607]: E0712 00:27:05.532411 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.532519 kubelet[2607]: W0712 00:27:05.532464 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.532519 kubelet[2607]: E0712 00:27:05.532477 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.532904 kubelet[2607]: E0712 00:27:05.532768 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.532904 kubelet[2607]: W0712 00:27:05.532782 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.532904 kubelet[2607]: E0712 00:27:05.532793 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:05.542220 kubelet[2607]: E0712 00:27:05.542199 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.542452 kubelet[2607]: W0712 00:27:05.542353 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.542452 kubelet[2607]: E0712 00:27:05.542378 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.542800 kubelet[2607]: E0712 00:27:05.542705 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.542800 kubelet[2607]: W0712 00:27:05.542717 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.542800 kubelet[2607]: E0712 00:27:05.542738 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.543227 kubelet[2607]: E0712 00:27:05.543091 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.543227 kubelet[2607]: W0712 00:27:05.543103 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.543227 kubelet[2607]: E0712 00:27:05.543120 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.543442 kubelet[2607]: E0712 00:27:05.543429 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.543550 kubelet[2607]: W0712 00:27:05.543484 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.543550 kubelet[2607]: E0712 00:27:05.543506 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.543841 kubelet[2607]: E0712 00:27:05.543788 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.543841 kubelet[2607]: W0712 00:27:05.543800 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.543929 kubelet[2607]: E0712 00:27:05.543833 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:05.544257 kubelet[2607]: E0712 00:27:05.544146 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.544257 kubelet[2607]: W0712 00:27:05.544159 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.544257 kubelet[2607]: E0712 00:27:05.544183 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.544533 kubelet[2607]: E0712 00:27:05.544471 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.544533 kubelet[2607]: W0712 00:27:05.544482 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.544533 kubelet[2607]: E0712 00:27:05.544513 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.545114 kubelet[2607]: E0712 00:27:05.544997 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.545114 kubelet[2607]: W0712 00:27:05.545011 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.545114 kubelet[2607]: E0712 00:27:05.545028 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.545631 kubelet[2607]: E0712 00:27:05.545439 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.545631 kubelet[2607]: W0712 00:27:05.545453 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.545631 kubelet[2607]: E0712 00:27:05.545470 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.545742 kubelet[2607]: E0712 00:27:05.545730 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.545780 kubelet[2607]: W0712 00:27:05.545745 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.545780 kubelet[2607]: E0712 00:27:05.545773 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:05.546067 kubelet[2607]: E0712 00:27:05.545985 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.546067 kubelet[2607]: W0712 00:27:05.545995 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.546067 kubelet[2607]: E0712 00:27:05.546020 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.546262 kubelet[2607]: E0712 00:27:05.546161 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.546262 kubelet[2607]: W0712 00:27:05.546171 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.546262 kubelet[2607]: E0712 00:27:05.546221 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.546570 kubelet[2607]: E0712 00:27:05.546362 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.546570 kubelet[2607]: W0712 00:27:05.546372 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.546570 kubelet[2607]: E0712 00:27:05.546387 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.547108 kubelet[2607]: E0712 00:27:05.546661 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.547108 kubelet[2607]: W0712 00:27:05.546675 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.547108 kubelet[2607]: E0712 00:27:05.546687 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.547108 kubelet[2607]: E0712 00:27:05.547054 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.547108 kubelet[2607]: W0712 00:27:05.547070 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.547108 kubelet[2607]: E0712 00:27:05.547080 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:05.547426 kubelet[2607]: E0712 00:27:05.547262 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.547426 kubelet[2607]: W0712 00:27:05.547274 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.547426 kubelet[2607]: E0712 00:27:05.547283 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.547514 kubelet[2607]: E0712 00:27:05.547499 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.547514 kubelet[2607]: W0712 00:27:05.547510 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.547567 kubelet[2607]: E0712 00:27:05.547519 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:27:05.548054 kubelet[2607]: E0712 00:27:05.548036 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:27:05.548054 kubelet[2607]: W0712 00:27:05.548053 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:27:05.548166 kubelet[2607]: E0712 00:27:05.548065 2607 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:27:06.014451 containerd[1535]: time="2025-07-12T00:27:06.014380958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:06.015269 containerd[1535]: time="2025-07-12T00:27:06.014954250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 12 00:27:06.016983 containerd[1535]: time="2025-07-12T00:27:06.016937926Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:06.017954 containerd[1535]: time="2025-07-12T00:27:06.017919363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:06.019298 containerd[1535]: time="2025-07-12T00:27:06.019259617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 885.346231ms" Jul 12 00:27:06.019345 containerd[1535]: time="2025-07-12T00:27:06.019299783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:27:06.022286 containerd[1535]: time="2025-07-12T00:27:06.022210848Z" level=info msg="CreateContainer within sandbox \"212d16a3794ad17b03a358c6538f9152a6b3ac500478a33e4e4a918aecfde6ab\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:27:06.034040 containerd[1535]: time="2025-07-12T00:27:06.033985646Z" level=info msg="CreateContainer within sandbox \"212d16a3794ad17b03a358c6538f9152a6b3ac500478a33e4e4a918aecfde6ab\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0cb85a34bbf8a798cf6c7b5c9bb1ed2d85f1633619b37754d7167bbdb1099441\"" Jul 12 00:27:06.038275 containerd[1535]: time="2025-07-12T00:27:06.034504969Z" level=info msg="StartContainer for \"0cb85a34bbf8a798cf6c7b5c9bb1ed2d85f1633619b37754d7167bbdb1099441\"" Jul 12 00:27:06.103886 containerd[1535]: time="2025-07-12T00:27:06.103824790Z" level=info msg="StartContainer for \"0cb85a34bbf8a798cf6c7b5c9bb1ed2d85f1633619b37754d7167bbdb1099441\" returns successfully" Jul 12 00:27:06.282561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cb85a34bbf8a798cf6c7b5c9bb1ed2d85f1633619b37754d7167bbdb1099441-rootfs.mount: Deactivated successfully. 
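[Editor's note] The repeated driver-call.go / plugins.go entries above come from the kubelet's dynamic FlexVolume prober: it walks the plugin directory (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds), invokes each driver binary with "init", and tries to unmarshal the JSON it prints. The uds binary is installed by the flexvol-driver init container (image ghcr.io/flatcar/calico/pod2daemon-flexvol, pulled and started just above), so until that container has run the call returns empty output and the unmarshal fails exactly as logged. The sketch below is illustrative only, not Calico's actual uds driver; it shows the minimal reply shape the prober expects from "init".

// flexvol_init_sketch.go: hedged, minimal sketch of a FlexVolume driver's
// "init" reply. Field names follow the documented FlexVolume call result;
// everything else here is an assumption for illustration.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the minimal fields of a FlexVolume call result.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// An empty reply (as in the log) makes the kubelet's JSON unmarshal fail;
		// a real driver prints something like this on stdout.
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any other call is reported as unsupported in this sketch.
	out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}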
Jul 12 00:27:06.396629 containerd[1535]: time="2025-07-12T00:27:06.391722007Z" level=info msg="shim disconnected" id=0cb85a34bbf8a798cf6c7b5c9bb1ed2d85f1633619b37754d7167bbdb1099441 namespace=k8s.io Jul 12 00:27:06.396629 containerd[1535]: time="2025-07-12T00:27:06.396616308Z" level=warning msg="cleaning up after shim disconnected" id=0cb85a34bbf8a798cf6c7b5c9bb1ed2d85f1633619b37754d7167bbdb1099441 namespace=k8s.io Jul 12 00:27:06.396629 containerd[1535]: time="2025-07-12T00:27:06.396632470Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:27:06.526209 kubelet[2607]: I0712 00:27:06.526131 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:27:06.529088 kubelet[2607]: E0712 00:27:06.528790 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:06.530053 containerd[1535]: time="2025-07-12T00:27:06.530012272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:27:06.550628 kubelet[2607]: I0712 00:27:06.550486 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c7486f44c-chdqt" podStartSLOduration=2.808309126 podStartE2EDuration="4.550469016s" podCreationTimestamp="2025-07-12 00:27:02 +0000 UTC" firstStartedPulling="2025-07-12 00:27:03.391316823 +0000 UTC m=+22.023869121" lastFinishedPulling="2025-07-12 00:27:05.133476713 +0000 UTC m=+23.766029011" observedRunningTime="2025-07-12 00:27:05.536093238 +0000 UTC m=+24.168645616" watchObservedRunningTime="2025-07-12 00:27:06.550469016 +0000 UTC m=+25.183021274" Jul 12 00:27:07.452562 kubelet[2607]: E0712 00:27:07.452509 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g8ld7" podUID="37964168-0f35-42e8-bcb1-d8b1fcfa1415" Jul 12 00:27:08.713518 containerd[1535]: time="2025-07-12T00:27:08.713461873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:08.714675 containerd[1535]: time="2025-07-12T00:27:08.714632525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 12 00:27:08.717111 containerd[1535]: time="2025-07-12T00:27:08.715714323Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:08.718195 containerd[1535]: time="2025-07-12T00:27:08.718153201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:08.719049 containerd[1535]: time="2025-07-12T00:27:08.719018808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.188957888s" Jul 12 00:27:08.719119 containerd[1535]: time="2025-07-12T00:27:08.719048172Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:27:08.721452 containerd[1535]: time="2025-07-12T00:27:08.721421120Z" level=info msg="CreateContainer within sandbox \"212d16a3794ad17b03a358c6538f9152a6b3ac500478a33e4e4a918aecfde6ab\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:27:08.734066 containerd[1535]: time="2025-07-12T00:27:08.734015805Z" level=info msg="CreateContainer within sandbox \"212d16a3794ad17b03a358c6538f9152a6b3ac500478a33e4e4a918aecfde6ab\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"16a47198f30444fa9dcd4ffe5088f31c7bdf091e7469267e7fb70238b6d24136\"" Jul 12 00:27:08.735076 containerd[1535]: time="2025-07-12T00:27:08.734952663Z" level=info msg="StartContainer for \"16a47198f30444fa9dcd4ffe5088f31c7bdf091e7469267e7fb70238b6d24136\"" Jul 12 00:27:08.788469 containerd[1535]: time="2025-07-12T00:27:08.788424259Z" level=info msg="StartContainer for \"16a47198f30444fa9dcd4ffe5088f31c7bdf091e7469267e7fb70238b6d24136\" returns successfully" Jul 12 00:27:09.453551 kubelet[2607]: E0712 00:27:09.452507 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g8ld7" podUID="37964168-0f35-42e8-bcb1-d8b1fcfa1415" Jul 12 00:27:09.529006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16a47198f30444fa9dcd4ffe5088f31c7bdf091e7469267e7fb70238b6d24136-rootfs.mount: Deactivated successfully. Jul 12 00:27:09.537650 containerd[1535]: time="2025-07-12T00:27:09.536863099Z" level=info msg="shim disconnected" id=16a47198f30444fa9dcd4ffe5088f31c7bdf091e7469267e7fb70238b6d24136 namespace=k8s.io Jul 12 00:27:09.537650 containerd[1535]: time="2025-07-12T00:27:09.536910826Z" level=warning msg="cleaning up after shim disconnected" id=16a47198f30444fa9dcd4ffe5088f31c7bdf091e7469267e7fb70238b6d24136 namespace=k8s.io Jul 12 00:27:09.537650 containerd[1535]: time="2025-07-12T00:27:09.536920348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:27:09.541788 kubelet[2607]: I0712 00:27:09.541759 2607 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:27:09.710682 kubelet[2607]: I0712 00:27:09.710559 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h2l7\" (UniqueName: \"kubernetes.io/projected/e5ce9aa9-2aec-4285-9ae0-962553767dc1-kube-api-access-9h2l7\") pod \"calico-kube-controllers-79f945c777-w6fsx\" (UID: \"e5ce9aa9-2aec-4285-9ae0-962553767dc1\") " pod="calico-system/calico-kube-controllers-79f945c777-w6fsx" Jul 12 00:27:09.710682 kubelet[2607]: I0712 00:27:09.710605 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f68a8a6e-2029-41f7-af68-f0e9a3b5f706-calico-apiserver-certs\") pod \"calico-apiserver-59f6799769-2znr7\" (UID: \"f68a8a6e-2029-41f7-af68-f0e9a3b5f706\") " pod="calico-apiserver/calico-apiserver-59f6799769-2znr7" Jul 12 00:27:09.710682 kubelet[2607]: I0712 00:27:09.710625 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4mf4\" (UniqueName: 
\"kubernetes.io/projected/2ce3af4a-44f7-483b-9fa4-a5cd1f72b652-kube-api-access-j4mf4\") pod \"coredns-7c65d6cfc9-prwgn\" (UID: \"2ce3af4a-44f7-483b-9fa4-a5cd1f72b652\") " pod="kube-system/coredns-7c65d6cfc9-prwgn" Jul 12 00:27:09.710682 kubelet[2607]: I0712 00:27:09.710643 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmjn\" (UniqueName: \"kubernetes.io/projected/b4305437-9cc5-4131-a2db-7fb983a5778e-kube-api-access-kfmjn\") pod \"coredns-7c65d6cfc9-gjr7m\" (UID: \"b4305437-9cc5-4131-a2db-7fb983a5778e\") " pod="kube-system/coredns-7c65d6cfc9-gjr7m" Jul 12 00:27:09.710682 kubelet[2607]: I0712 00:27:09.710672 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/579201fa-7a7d-4798-905a-ffd457a5f297-whisker-backend-key-pair\") pod \"whisker-b7c4d74cb-6q9dz\" (UID: \"579201fa-7a7d-4798-905a-ffd457a5f297\") " pod="calico-system/whisker-b7c4d74cb-6q9dz" Jul 12 00:27:09.710898 kubelet[2607]: I0712 00:27:09.710689 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/579201fa-7a7d-4798-905a-ffd457a5f297-whisker-ca-bundle\") pod \"whisker-b7c4d74cb-6q9dz\" (UID: \"579201fa-7a7d-4798-905a-ffd457a5f297\") " pod="calico-system/whisker-b7c4d74cb-6q9dz" Jul 12 00:27:09.710898 kubelet[2607]: I0712 00:27:09.710707 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7pzw\" (UniqueName: \"kubernetes.io/projected/579201fa-7a7d-4798-905a-ffd457a5f297-kube-api-access-f7pzw\") pod \"whisker-b7c4d74cb-6q9dz\" (UID: \"579201fa-7a7d-4798-905a-ffd457a5f297\") " pod="calico-system/whisker-b7c4d74cb-6q9dz" Jul 12 00:27:09.710898 kubelet[2607]: I0712 00:27:09.710726 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3f5a9e1e-e147-4c32-bfe2-69710b77be5f-calico-apiserver-certs\") pod \"calico-apiserver-59f6799769-qhdzl\" (UID: \"3f5a9e1e-e147-4c32-bfe2-69710b77be5f\") " pod="calico-apiserver/calico-apiserver-59f6799769-qhdzl" Jul 12 00:27:09.710898 kubelet[2607]: I0712 00:27:09.710741 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spqwv\" (UniqueName: \"kubernetes.io/projected/3f5a9e1e-e147-4c32-bfe2-69710b77be5f-kube-api-access-spqwv\") pod \"calico-apiserver-59f6799769-qhdzl\" (UID: \"3f5a9e1e-e147-4c32-bfe2-69710b77be5f\") " pod="calico-apiserver/calico-apiserver-59f6799769-qhdzl" Jul 12 00:27:09.710898 kubelet[2607]: I0712 00:27:09.710757 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4305437-9cc5-4131-a2db-7fb983a5778e-config-volume\") pod \"coredns-7c65d6cfc9-gjr7m\" (UID: \"b4305437-9cc5-4131-a2db-7fb983a5778e\") " pod="kube-system/coredns-7c65d6cfc9-gjr7m" Jul 12 00:27:09.711023 kubelet[2607]: I0712 00:27:09.710789 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmtjs\" (UniqueName: \"kubernetes.io/projected/d9b719b0-599b-4efc-90b7-09fca6dfcce5-kube-api-access-qmtjs\") pod \"goldmane-58fd7646b9-vmwvl\" (UID: \"d9b719b0-599b-4efc-90b7-09fca6dfcce5\") " pod="calico-system/goldmane-58fd7646b9-vmwvl" Jul 12 
00:27:09.711023 kubelet[2607]: I0712 00:27:09.710806 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ce9aa9-2aec-4285-9ae0-962553767dc1-tigera-ca-bundle\") pod \"calico-kube-controllers-79f945c777-w6fsx\" (UID: \"e5ce9aa9-2aec-4285-9ae0-962553767dc1\") " pod="calico-system/calico-kube-controllers-79f945c777-w6fsx" Jul 12 00:27:09.711023 kubelet[2607]: I0712 00:27:09.710820 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b719b0-599b-4efc-90b7-09fca6dfcce5-config\") pod \"goldmane-58fd7646b9-vmwvl\" (UID: \"d9b719b0-599b-4efc-90b7-09fca6dfcce5\") " pod="calico-system/goldmane-58fd7646b9-vmwvl" Jul 12 00:27:09.711023 kubelet[2607]: I0712 00:27:09.710834 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ce3af4a-44f7-483b-9fa4-a5cd1f72b652-config-volume\") pod \"coredns-7c65d6cfc9-prwgn\" (UID: \"2ce3af4a-44f7-483b-9fa4-a5cd1f72b652\") " pod="kube-system/coredns-7c65d6cfc9-prwgn" Jul 12 00:27:09.711023 kubelet[2607]: I0712 00:27:09.710851 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpgbf\" (UniqueName: \"kubernetes.io/projected/f68a8a6e-2029-41f7-af68-f0e9a3b5f706-kube-api-access-fpgbf\") pod \"calico-apiserver-59f6799769-2znr7\" (UID: \"f68a8a6e-2029-41f7-af68-f0e9a3b5f706\") " pod="calico-apiserver/calico-apiserver-59f6799769-2znr7" Jul 12 00:27:09.711138 kubelet[2607]: I0712 00:27:09.710872 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9b719b0-599b-4efc-90b7-09fca6dfcce5-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-vmwvl\" (UID: \"d9b719b0-599b-4efc-90b7-09fca6dfcce5\") " pod="calico-system/goldmane-58fd7646b9-vmwvl" Jul 12 00:27:09.711138 kubelet[2607]: I0712 00:27:09.710888 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d9b719b0-599b-4efc-90b7-09fca6dfcce5-goldmane-key-pair\") pod \"goldmane-58fd7646b9-vmwvl\" (UID: \"d9b719b0-599b-4efc-90b7-09fca6dfcce5\") " pod="calico-system/goldmane-58fd7646b9-vmwvl" Jul 12 00:27:09.892421 kubelet[2607]: E0712 00:27:09.892363 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:09.893004 containerd[1535]: time="2025-07-12T00:27:09.892959543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gjr7m,Uid:b4305437-9cc5-4131-a2db-7fb983a5778e,Namespace:kube-system,Attempt:0,}" Jul 12 00:27:09.898535 containerd[1535]: time="2025-07-12T00:27:09.898489241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6799769-qhdzl,Uid:3f5a9e1e-e147-4c32-bfe2-69710b77be5f,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:27:09.900013 kubelet[2607]: E0712 00:27:09.899984 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:09.900446 containerd[1535]: time="2025-07-12T00:27:09.900366585Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-prwgn,Uid:2ce3af4a-44f7-483b-9fa4-a5cd1f72b652,Namespace:kube-system,Attempt:0,}" Jul 12 00:27:09.901786 containerd[1535]: time="2025-07-12T00:27:09.901748259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b7c4d74cb-6q9dz,Uid:579201fa-7a7d-4798-905a-ffd457a5f297,Namespace:calico-system,Attempt:0,}" Jul 12 00:27:09.909261 containerd[1535]: time="2025-07-12T00:27:09.906649748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f945c777-w6fsx,Uid:e5ce9aa9-2aec-4285-9ae0-962553767dc1,Namespace:calico-system,Attempt:0,}" Jul 12 00:27:09.909902 containerd[1535]: time="2025-07-12T00:27:09.909853959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6799769-2znr7,Uid:f68a8a6e-2029-41f7-af68-f0e9a3b5f706,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:27:09.910148 containerd[1535]: time="2025-07-12T00:27:09.910126077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vmwvl,Uid:d9b719b0-599b-4efc-90b7-09fca6dfcce5,Namespace:calico-system,Attempt:0,}" Jul 12 00:27:10.398292 containerd[1535]: time="2025-07-12T00:27:10.398130477Z" level=error msg="Failed to destroy network for sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.398746 containerd[1535]: time="2025-07-12T00:27:10.398714956Z" level=error msg="encountered an error cleaning up failed sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.398975 containerd[1535]: time="2025-07-12T00:27:10.398850534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6799769-qhdzl,Uid:3f5a9e1e-e147-4c32-bfe2-69710b77be5f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.401510 kubelet[2607]: E0712 00:27:10.401322 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.405057 containerd[1535]: time="2025-07-12T00:27:10.405012326Z" level=error msg="Failed to destroy network for sandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.405408 containerd[1535]: time="2025-07-12T00:27:10.405371175Z" level=error msg="encountered an error cleaning up failed sandbox 
\"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.405619 containerd[1535]: time="2025-07-12T00:27:10.405434703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-prwgn,Uid:2ce3af4a-44f7-483b-9fa4-a5cd1f72b652,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.407250 kubelet[2607]: E0712 00:27:10.407012 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f6799769-qhdzl" Jul 12 00:27:10.407250 kubelet[2607]: E0712 00:27:10.407075 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f6799769-qhdzl" Jul 12 00:27:10.407250 kubelet[2607]: E0712 00:27:10.407127 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f6799769-qhdzl_calico-apiserver(3f5a9e1e-e147-4c32-bfe2-69710b77be5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59f6799769-qhdzl_calico-apiserver(3f5a9e1e-e147-4c32-bfe2-69710b77be5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f6799769-qhdzl" podUID="3f5a9e1e-e147-4c32-bfe2-69710b77be5f" Jul 12 00:27:10.410337 containerd[1535]: time="2025-07-12T00:27:10.410291800Z" level=error msg="Failed to destroy network for sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.410950 containerd[1535]: time="2025-07-12T00:27:10.410913244Z" level=error msg="encountered an error cleaning up failed sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.411502 
containerd[1535]: time="2025-07-12T00:27:10.410964411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b7c4d74cb-6q9dz,Uid:579201fa-7a7d-4798-905a-ffd457a5f297,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.411559 kubelet[2607]: E0712 00:27:10.411266 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.413705 kubelet[2607]: E0712 00:27:10.412975 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.413705 kubelet[2607]: E0712 00:27:10.413029 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-prwgn" Jul 12 00:27:10.413705 kubelet[2607]: E0712 00:27:10.413048 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-prwgn" Jul 12 00:27:10.413882 kubelet[2607]: E0712 00:27:10.413097 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-prwgn_kube-system(2ce3af4a-44f7-483b-9fa4-a5cd1f72b652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-prwgn_kube-system(2ce3af4a-44f7-483b-9fa4-a5cd1f72b652)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-prwgn" podUID="2ce3af4a-44f7-483b-9fa4-a5cd1f72b652" Jul 12 00:27:10.415283 kubelet[2607]: E0712 00:27:10.411315 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b7c4d74cb-6q9dz" Jul 12 00:27:10.415283 kubelet[2607]: E0712 00:27:10.415118 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b7c4d74cb-6q9dz" Jul 12 00:27:10.415283 kubelet[2607]: E0712 00:27:10.415166 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b7c4d74cb-6q9dz_calico-system(579201fa-7a7d-4798-905a-ffd457a5f297)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b7c4d74cb-6q9dz_calico-system(579201fa-7a7d-4798-905a-ffd457a5f297)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b7c4d74cb-6q9dz" podUID="579201fa-7a7d-4798-905a-ffd457a5f297" Jul 12 00:27:10.417225 containerd[1535]: time="2025-07-12T00:27:10.417186811Z" level=error msg="Failed to destroy network for sandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.419431 containerd[1535]: time="2025-07-12T00:27:10.419383788Z" level=error msg="encountered an error cleaning up failed sandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.419491 containerd[1535]: time="2025-07-12T00:27:10.419450277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f945c777-w6fsx,Uid:e5ce9aa9-2aec-4285-9ae0-962553767dc1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.419760 kubelet[2607]: E0712 00:27:10.419623 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.419856 kubelet[2607]: E0712 00:27:10.419769 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79f945c777-w6fsx" Jul 12 00:27:10.419856 kubelet[2607]: E0712 00:27:10.419786 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79f945c777-w6fsx" Jul 12 00:27:10.419856 kubelet[2607]: E0712 00:27:10.419822 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79f945c777-w6fsx_calico-system(e5ce9aa9-2aec-4285-9ae0-962553767dc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79f945c777-w6fsx_calico-system(e5ce9aa9-2aec-4285-9ae0-962553767dc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79f945c777-w6fsx" podUID="e5ce9aa9-2aec-4285-9ae0-962553767dc1" Jul 12 00:27:10.420162 containerd[1535]: time="2025-07-12T00:27:10.420043277Z" level=error msg="Failed to destroy network for sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.420559 containerd[1535]: time="2025-07-12T00:27:10.420515381Z" level=error msg="encountered an error cleaning up failed sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.420593 containerd[1535]: time="2025-07-12T00:27:10.420562347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vmwvl,Uid:d9b719b0-599b-4efc-90b7-09fca6dfcce5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.421685 kubelet[2607]: E0712 00:27:10.421650 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.421736 kubelet[2607]: E0712 00:27:10.421693 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-vmwvl" Jul 12 00:27:10.421736 kubelet[2607]: E0712 00:27:10.421708 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-vmwvl" Jul 12 00:27:10.421790 kubelet[2607]: E0712 00:27:10.421735 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-vmwvl_calico-system(d9b719b0-599b-4efc-90b7-09fca6dfcce5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-vmwvl_calico-system(d9b719b0-599b-4efc-90b7-09fca6dfcce5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-vmwvl" podUID="d9b719b0-599b-4efc-90b7-09fca6dfcce5" Jul 12 00:27:10.430369 containerd[1535]: time="2025-07-12T00:27:10.430322426Z" level=error msg="Failed to destroy network for sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.430672 containerd[1535]: time="2025-07-12T00:27:10.430636229Z" level=error msg="encountered an error cleaning up failed sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.430723 containerd[1535]: time="2025-07-12T00:27:10.430697597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6799769-2znr7,Uid:f68a8a6e-2029-41f7-af68-f0e9a3b5f706,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.430904 kubelet[2607]: E0712 00:27:10.430861 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.430960 kubelet[2607]: E0712 00:27:10.430911 2607 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f6799769-2znr7" Jul 12 00:27:10.430960 kubelet[2607]: E0712 00:27:10.430929 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f6799769-2znr7" Jul 12 00:27:10.431011 kubelet[2607]: E0712 00:27:10.430968 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f6799769-2znr7_calico-apiserver(f68a8a6e-2029-41f7-af68-f0e9a3b5f706)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59f6799769-2znr7_calico-apiserver(f68a8a6e-2029-41f7-af68-f0e9a3b5f706)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f6799769-2znr7" podUID="f68a8a6e-2029-41f7-af68-f0e9a3b5f706" Jul 12 00:27:10.432741 containerd[1535]: time="2025-07-12T00:27:10.432698627Z" level=error msg="Failed to destroy network for sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.433066 containerd[1535]: time="2025-07-12T00:27:10.432966903Z" level=error msg="encountered an error cleaning up failed sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.433630 containerd[1535]: time="2025-07-12T00:27:10.433544741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gjr7m,Uid:b4305437-9cc5-4131-a2db-7fb983a5778e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.433751 kubelet[2607]: E0712 00:27:10.433706 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.433788 kubelet[2607]: E0712 00:27:10.433770 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gjr7m" Jul 12 00:27:10.433821 kubelet[2607]: E0712 00:27:10.433786 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gjr7m" Jul 12 00:27:10.433849 kubelet[2607]: E0712 00:27:10.433815 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gjr7m_kube-system(b4305437-9cc5-4131-a2db-7fb983a5778e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gjr7m_kube-system(b4305437-9cc5-4131-a2db-7fb983a5778e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gjr7m" podUID="b4305437-9cc5-4131-a2db-7fb983a5778e" Jul 12 00:27:10.539407 kubelet[2607]: I0712 00:27:10.539376 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:10.541862 containerd[1535]: time="2025-07-12T00:27:10.541812250Z" level=info msg="StopPodSandbox for \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\"" Jul 12 00:27:10.542143 containerd[1535]: time="2025-07-12T00:27:10.542106049Z" level=info msg="Ensure that sandbox 64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108 in task-service has been cleanup successfully" Jul 12 00:27:10.542529 kubelet[2607]: I0712 00:27:10.542499 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:10.543412 containerd[1535]: time="2025-07-12T00:27:10.543382982Z" level=info msg="StopPodSandbox for \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\"" Jul 12 00:27:10.543737 kubelet[2607]: I0712 00:27:10.543684 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:10.543789 containerd[1535]: time="2025-07-12T00:27:10.543687143Z" level=info msg="Ensure that sandbox 7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62 in task-service has been cleanup successfully" Jul 12 00:27:10.544833 containerd[1535]: time="2025-07-12T00:27:10.544449046Z" level=info msg="StopPodSandbox for \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\"" Jul 12 00:27:10.544833 containerd[1535]: 
time="2025-07-12T00:27:10.544626390Z" level=info msg="Ensure that sandbox 128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059 in task-service has been cleanup successfully" Jul 12 00:27:10.549961 kubelet[2607]: I0712 00:27:10.549926 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:10.550803 containerd[1535]: time="2025-07-12T00:27:10.550398690Z" level=info msg="StopPodSandbox for \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\"" Jul 12 00:27:10.550803 containerd[1535]: time="2025-07-12T00:27:10.550566873Z" level=info msg="Ensure that sandbox 808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c in task-service has been cleanup successfully" Jul 12 00:27:10.558710 containerd[1535]: time="2025-07-12T00:27:10.558675928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:27:10.563551 kubelet[2607]: I0712 00:27:10.563519 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:10.564867 containerd[1535]: time="2025-07-12T00:27:10.564831000Z" level=info msg="StopPodSandbox for \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\"" Jul 12 00:27:10.566672 kubelet[2607]: I0712 00:27:10.566637 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:10.570478 containerd[1535]: time="2025-07-12T00:27:10.570441918Z" level=info msg="Ensure that sandbox d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9 in task-service has been cleanup successfully" Jul 12 00:27:10.571106 containerd[1535]: time="2025-07-12T00:27:10.571076444Z" level=info msg="StopPodSandbox for \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\"" Jul 12 00:27:10.571284 containerd[1535]: time="2025-07-12T00:27:10.571265589Z" level=info msg="Ensure that sandbox 48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d in task-service has been cleanup successfully" Jul 12 00:27:10.573463 kubelet[2607]: I0712 00:27:10.573418 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:10.577216 containerd[1535]: time="2025-07-12T00:27:10.576747490Z" level=info msg="StopPodSandbox for \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\"" Jul 12 00:27:10.578919 containerd[1535]: time="2025-07-12T00:27:10.578713636Z" level=info msg="Ensure that sandbox 693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466 in task-service has been cleanup successfully" Jul 12 00:27:10.596568 containerd[1535]: time="2025-07-12T00:27:10.596519081Z" level=error msg="StopPodSandbox for \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\" failed" error="failed to destroy network for sandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.597094 kubelet[2607]: E0712 00:27:10.596972 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:10.597174 kubelet[2607]: E0712 00:27:10.597066 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059"} Jul 12 00:27:10.597174 kubelet[2607]: E0712 00:27:10.597139 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5ce9aa9-2aec-4285-9ae0-962553767dc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:27:10.597174 kubelet[2607]: E0712 00:27:10.597163 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5ce9aa9-2aec-4285-9ae0-962553767dc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79f945c777-w6fsx" podUID="e5ce9aa9-2aec-4285-9ae0-962553767dc1" Jul 12 00:27:10.602856 containerd[1535]: time="2025-07-12T00:27:10.602732441Z" level=error msg="StopPodSandbox for \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\" failed" error="failed to destroy network for sandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.603034 kubelet[2607]: E0712 00:27:10.602978 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:10.603101 kubelet[2607]: E0712 00:27:10.603034 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108"} Jul 12 00:27:10.603101 kubelet[2607]: E0712 00:27:10.603068 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ce3af4a-44f7-483b-9fa4-a5cd1f72b652\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jul 12 00:27:10.603368 kubelet[2607]: E0712 00:27:10.603097 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ce3af4a-44f7-483b-9fa4-a5cd1f72b652\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-prwgn" podUID="2ce3af4a-44f7-483b-9fa4-a5cd1f72b652" Jul 12 00:27:10.603426 containerd[1535]: time="2025-07-12T00:27:10.603206465Z" level=error msg="StopPodSandbox for \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\" failed" error="failed to destroy network for sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.604825 kubelet[2607]: E0712 00:27:10.604760 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:10.604825 kubelet[2607]: E0712 00:27:10.604813 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c"} Jul 12 00:27:10.604929 kubelet[2607]: E0712 00:27:10.604844 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4305437-9cc5-4131-a2db-7fb983a5778e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:27:10.604929 kubelet[2607]: E0712 00:27:10.604867 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4305437-9cc5-4131-a2db-7fb983a5778e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gjr7m" podUID="b4305437-9cc5-4131-a2db-7fb983a5778e" Jul 12 00:27:10.620437 containerd[1535]: time="2025-07-12T00:27:10.620388186Z" level=error msg="StopPodSandbox for \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\" failed" error="failed to destroy network for sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.620646 containerd[1535]: time="2025-07-12T00:27:10.620585493Z" level=error msg="StopPodSandbox for \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\" failed" error="failed to destroy network for sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.620722 containerd[1535]: time="2025-07-12T00:27:10.620622578Z" level=error msg="StopPodSandbox for \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\" failed" error="failed to destroy network for sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.620760 kubelet[2607]: E0712 00:27:10.620697 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:10.620760 kubelet[2607]: E0712 00:27:10.620750 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62"} Jul 12 00:27:10.620832 kubelet[2607]: E0712 00:27:10.620782 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f5a9e1e-e147-4c32-bfe2-69710b77be5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:27:10.620832 kubelet[2607]: E0712 00:27:10.620789 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:10.620832 kubelet[2607]: E0712 00:27:10.620808 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f5a9e1e-e147-4c32-bfe2-69710b77be5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f6799769-qhdzl" podUID="3f5a9e1e-e147-4c32-bfe2-69710b77be5f" Jul 12 00:27:10.620832 
kubelet[2607]: E0712 00:27:10.620826 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d"} Jul 12 00:27:10.620966 kubelet[2607]: E0712 00:27:10.620852 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f68a8a6e-2029-41f7-af68-f0e9a3b5f706\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:27:10.620966 kubelet[2607]: E0712 00:27:10.620871 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f68a8a6e-2029-41f7-af68-f0e9a3b5f706\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f6799769-2znr7" podUID="f68a8a6e-2029-41f7-af68-f0e9a3b5f706" Jul 12 00:27:10.621035 kubelet[2607]: E0712 00:27:10.621009 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:10.621059 kubelet[2607]: E0712 00:27:10.621035 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466"} Jul 12 00:27:10.621081 kubelet[2607]: E0712 00:27:10.621058 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"579201fa-7a7d-4798-905a-ffd457a5f297\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:27:10.621117 kubelet[2607]: E0712 00:27:10.621076 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"579201fa-7a7d-4798-905a-ffd457a5f297\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b7c4d74cb-6q9dz" podUID="579201fa-7a7d-4798-905a-ffd457a5f297" Jul 12 00:27:10.627150 containerd[1535]: time="2025-07-12T00:27:10.627024843Z" level=error msg="StopPodSandbox for 
\"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\" failed" error="failed to destroy network for sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:10.627307 kubelet[2607]: E0712 00:27:10.627256 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:10.627372 kubelet[2607]: E0712 00:27:10.627304 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9"} Jul 12 00:27:10.627372 kubelet[2607]: E0712 00:27:10.627333 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9b719b0-599b-4efc-90b7-09fca6dfcce5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:27:10.627372 kubelet[2607]: E0712 00:27:10.627358 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9b719b0-599b-4efc-90b7-09fca6dfcce5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-vmwvl" podUID="d9b719b0-599b-4efc-90b7-09fca6dfcce5" Jul 12 00:27:10.819936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c-shm.mount: Deactivated successfully. Jul 12 00:27:11.457271 containerd[1535]: time="2025-07-12T00:27:11.457201406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g8ld7,Uid:37964168-0f35-42e8-bcb1-d8b1fcfa1415,Namespace:calico-system,Attempt:0,}" Jul 12 00:27:11.516101 containerd[1535]: time="2025-07-12T00:27:11.516043411Z" level=error msg="Failed to destroy network for sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:11.518895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c-shm.mount: Deactivated successfully. 
Jul 12 00:27:11.519593 containerd[1535]: time="2025-07-12T00:27:11.519547626Z" level=error msg="encountered an error cleaning up failed sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:11.519698 containerd[1535]: time="2025-07-12T00:27:11.519629397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g8ld7,Uid:37964168-0f35-42e8-bcb1-d8b1fcfa1415,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:11.520175 kubelet[2607]: E0712 00:27:11.519860 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:11.520175 kubelet[2607]: E0712 00:27:11.519926 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g8ld7" Jul 12 00:27:11.520175 kubelet[2607]: E0712 00:27:11.519948 2607 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g8ld7" Jul 12 00:27:11.520339 kubelet[2607]: E0712 00:27:11.519996 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g8ld7_calico-system(37964168-0f35-42e8-bcb1-d8b1fcfa1415)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g8ld7_calico-system(37964168-0f35-42e8-bcb1-d8b1fcfa1415)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g8ld7" podUID="37964168-0f35-42e8-bcb1-d8b1fcfa1415" Jul 12 00:27:11.576659 kubelet[2607]: I0712 00:27:11.576162 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:11.577489 containerd[1535]: time="2025-07-12T00:27:11.577382020Z" level=info msg="StopPodSandbox for 
\"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\"" Jul 12 00:27:11.577590 containerd[1535]: time="2025-07-12T00:27:11.577558403Z" level=info msg="Ensure that sandbox cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c in task-service has been cleanup successfully" Jul 12 00:27:11.607096 containerd[1535]: time="2025-07-12T00:27:11.607025912Z" level=error msg="StopPodSandbox for \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\" failed" error="failed to destroy network for sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:27:11.607613 kubelet[2607]: E0712 00:27:11.607424 2607 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:11.607613 kubelet[2607]: E0712 00:27:11.607470 2607 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c"} Jul 12 00:27:11.607613 kubelet[2607]: E0712 00:27:11.607506 2607 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37964168-0f35-42e8-bcb1-d8b1fcfa1415\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:27:11.607613 kubelet[2607]: E0712 00:27:11.607527 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37964168-0f35-42e8-bcb1-d8b1fcfa1415\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g8ld7" podUID="37964168-0f35-42e8-bcb1-d8b1fcfa1415" Jul 12 00:27:14.584103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493270826.mount: Deactivated successfully. 
Jul 12 00:27:14.884681 containerd[1535]: time="2025-07-12T00:27:14.884548546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:14.887800 containerd[1535]: time="2025-07-12T00:27:14.887761760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:27:14.888792 containerd[1535]: time="2025-07-12T00:27:14.888730352Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:14.891436 containerd[1535]: time="2025-07-12T00:27:14.891381900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:14.892261 containerd[1535]: time="2025-07-12T00:27:14.892215117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.333358365s" Jul 12 00:27:14.892324 containerd[1535]: time="2025-07-12T00:27:14.892264403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:27:14.900970 containerd[1535]: time="2025-07-12T00:27:14.900882245Z" level=info msg="CreateContainer within sandbox \"212d16a3794ad17b03a358c6538f9152a6b3ac500478a33e4e4a918aecfde6ab\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:27:14.915934 containerd[1535]: time="2025-07-12T00:27:14.915807899Z" level=info msg="CreateContainer within sandbox \"212d16a3794ad17b03a358c6538f9152a6b3ac500478a33e4e4a918aecfde6ab\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b59495e9b663c3d422751689f5bf95c60593d9211d75b9599e4f14cd87b036be\"" Jul 12 00:27:14.917307 containerd[1535]: time="2025-07-12T00:27:14.917112411Z" level=info msg="StartContainer for \"b59495e9b663c3d422751689f5bf95c60593d9211d75b9599e4f14cd87b036be\"" Jul 12 00:27:14.932254 kubelet[2607]: I0712 00:27:14.931987 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:27:14.933289 kubelet[2607]: E0712 00:27:14.932902 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:15.007513 containerd[1535]: time="2025-07-12T00:27:15.007456086Z" level=info msg="StartContainer for \"b59495e9b663c3d422751689f5bf95c60593d9211d75b9599e4f14cd87b036be\" returns successfully" Jul 12 00:27:15.200554 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:27:15.200667 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 12 00:27:15.332786 containerd[1535]: time="2025-07-12T00:27:15.332607174Z" level=info msg="StopPodSandbox for \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\"" Jul 12 00:27:15.591812 kubelet[2607]: E0712 00:27:15.591713 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:15.611520 kubelet[2607]: I0712 00:27:15.611428 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7k78z" podStartSLOduration=1.417689126 podStartE2EDuration="12.61139998s" podCreationTimestamp="2025-07-12 00:27:03 +0000 UTC" firstStartedPulling="2025-07-12 00:27:03.699327599 +0000 UTC m=+22.331879897" lastFinishedPulling="2025-07-12 00:27:14.893038453 +0000 UTC m=+33.525590751" observedRunningTime="2025-07-12 00:27:15.61140274 +0000 UTC m=+34.243954998" watchObservedRunningTime="2025-07-12 00:27:15.61139998 +0000 UTC m=+34.243952238" Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.538 [INFO][3904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.541 [INFO][3904] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" iface="eth0" netns="/var/run/netns/cni-cc5b67e8-99ac-1e7d-ba0c-049448350b8a" Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.542 [INFO][3904] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" iface="eth0" netns="/var/run/netns/cni-cc5b67e8-99ac-1e7d-ba0c-049448350b8a" Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.542 [INFO][3904] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" iface="eth0" netns="/var/run/netns/cni-cc5b67e8-99ac-1e7d-ba0c-049448350b8a" Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.543 [INFO][3904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.543 [INFO][3904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.706 [INFO][3913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" HandleID="k8s-pod-network.693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Workload="localhost-k8s-whisker--b7c4d74cb--6q9dz-eth0" Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.706 [INFO][3913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.706 [INFO][3913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.716 [WARNING][3913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" HandleID="k8s-pod-network.693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Workload="localhost-k8s-whisker--b7c4d74cb--6q9dz-eth0" Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.716 [INFO][3913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" HandleID="k8s-pod-network.693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Workload="localhost-k8s-whisker--b7c4d74cb--6q9dz-eth0" Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.718 [INFO][3913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:15.722797 containerd[1535]: 2025-07-12 00:27:15.720 [INFO][3904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:15.725053 systemd[1]: run-netns-cni\x2dcc5b67e8\x2d99ac\x2d1e7d\x2dba0c\x2d049448350b8a.mount: Deactivated successfully. Jul 12 00:27:15.725466 containerd[1535]: time="2025-07-12T00:27:15.725428536Z" level=info msg="TearDown network for sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\" successfully" Jul 12 00:27:15.725515 containerd[1535]: time="2025-07-12T00:27:15.725467580Z" level=info msg="StopPodSandbox for \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\" returns successfully" Jul 12 00:27:15.859912 kubelet[2607]: I0712 00:27:15.859465 2607 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/579201fa-7a7d-4798-905a-ffd457a5f297-whisker-backend-key-pair\") pod \"579201fa-7a7d-4798-905a-ffd457a5f297\" (UID: \"579201fa-7a7d-4798-905a-ffd457a5f297\") " Jul 12 00:27:15.859912 kubelet[2607]: I0712 00:27:15.859508 2607 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7pzw\" (UniqueName: \"kubernetes.io/projected/579201fa-7a7d-4798-905a-ffd457a5f297-kube-api-access-f7pzw\") pod \"579201fa-7a7d-4798-905a-ffd457a5f297\" (UID: \"579201fa-7a7d-4798-905a-ffd457a5f297\") " Jul 12 00:27:15.859912 kubelet[2607]: I0712 00:27:15.859529 2607 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/579201fa-7a7d-4798-905a-ffd457a5f297-whisker-ca-bundle\") pod \"579201fa-7a7d-4798-905a-ffd457a5f297\" (UID: \"579201fa-7a7d-4798-905a-ffd457a5f297\") " Jul 12 00:27:15.859912 kubelet[2607]: I0712 00:27:15.859866 2607 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/579201fa-7a7d-4798-905a-ffd457a5f297-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "579201fa-7a7d-4798-905a-ffd457a5f297" (UID: "579201fa-7a7d-4798-905a-ffd457a5f297"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:27:15.863637 kubelet[2607]: I0712 00:27:15.863586 2607 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579201fa-7a7d-4798-905a-ffd457a5f297-kube-api-access-f7pzw" (OuterVolumeSpecName: "kube-api-access-f7pzw") pod "579201fa-7a7d-4798-905a-ffd457a5f297" (UID: "579201fa-7a7d-4798-905a-ffd457a5f297"). InnerVolumeSpecName "kube-api-access-f7pzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:27:15.864971 systemd[1]: var-lib-kubelet-pods-579201fa\x2d7a7d\x2d4798\x2d905a\x2dffd457a5f297-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df7pzw.mount: Deactivated successfully. Jul 12 00:27:15.870362 systemd[1]: var-lib-kubelet-pods-579201fa\x2d7a7d\x2d4798\x2d905a\x2dffd457a5f297-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 00:27:15.870940 kubelet[2607]: I0712 00:27:15.870890 2607 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579201fa-7a7d-4798-905a-ffd457a5f297-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "579201fa-7a7d-4798-905a-ffd457a5f297" (UID: "579201fa-7a7d-4798-905a-ffd457a5f297"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:27:15.960901 kubelet[2607]: I0712 00:27:15.960848 2607 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/579201fa-7a7d-4798-905a-ffd457a5f297-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 12 00:27:15.960901 kubelet[2607]: I0712 00:27:15.960890 2607 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7pzw\" (UniqueName: \"kubernetes.io/projected/579201fa-7a7d-4798-905a-ffd457a5f297-kube-api-access-f7pzw\") on node \"localhost\" DevicePath \"\"" Jul 12 00:27:15.960901 kubelet[2607]: I0712 00:27:15.960902 2607 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/579201fa-7a7d-4798-905a-ffd457a5f297-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 12 00:27:16.593185 kubelet[2607]: I0712 00:27:16.593159 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:27:16.766175 kubelet[2607]: I0712 00:27:16.766129 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b3208c-71e5-428a-b5a6-daccbb3dab8e-whisker-ca-bundle\") pod \"whisker-6fdbd4fdd-jdrhw\" (UID: \"e0b3208c-71e5-428a-b5a6-daccbb3dab8e\") " pod="calico-system/whisker-6fdbd4fdd-jdrhw" Jul 12 00:27:16.766348 kubelet[2607]: I0712 00:27:16.766218 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0b3208c-71e5-428a-b5a6-daccbb3dab8e-whisker-backend-key-pair\") pod \"whisker-6fdbd4fdd-jdrhw\" (UID: \"e0b3208c-71e5-428a-b5a6-daccbb3dab8e\") " pod="calico-system/whisker-6fdbd4fdd-jdrhw" Jul 12 00:27:16.766348 kubelet[2607]: I0712 00:27:16.766253 2607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jrj\" (UniqueName: \"kubernetes.io/projected/e0b3208c-71e5-428a-b5a6-daccbb3dab8e-kube-api-access-k7jrj\") pod \"whisker-6fdbd4fdd-jdrhw\" (UID: \"e0b3208c-71e5-428a-b5a6-daccbb3dab8e\") " pod="calico-system/whisker-6fdbd4fdd-jdrhw" Jul 12 00:27:16.955888 containerd[1535]: time="2025-07-12T00:27:16.955779622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fdbd4fdd-jdrhw,Uid:e0b3208c-71e5-428a-b5a6-daccbb3dab8e,Namespace:calico-system,Attempt:0,}" Jul 12 00:27:16.967258 kernel: bpftool[4063]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 12 00:27:17.141038 systemd-networkd[1234]: 
vxlan.calico: Link UP Jul 12 00:27:17.141049 systemd-networkd[1234]: vxlan.calico: Gained carrier Jul 12 00:27:17.192728 systemd-networkd[1234]: calif6ccc0d09e9: Link UP Jul 12 00:27:17.192967 systemd-networkd[1234]: calif6ccc0d09e9: Gained carrier Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.096 [INFO][4064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0 whisker-6fdbd4fdd- calico-system e0b3208c-71e5-428a-b5a6-daccbb3dab8e 938 0 2025-07-12 00:27:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fdbd4fdd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6fdbd4fdd-jdrhw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif6ccc0d09e9 [] [] }} ContainerID="47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" Namespace="calico-system" Pod="whisker-6fdbd4fdd-jdrhw" WorkloadEndpoint="localhost-k8s-whisker--6fdbd4fdd--jdrhw-" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.096 [INFO][4064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" Namespace="calico-system" Pod="whisker-6fdbd4fdd-jdrhw" WorkloadEndpoint="localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.125 [INFO][4092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" HandleID="k8s-pod-network.47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" Workload="localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.125 [INFO][4092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" HandleID="k8s-pod-network.47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" Workload="localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6fdbd4fdd-jdrhw", "timestamp":"2025-07-12 00:27:17.125404703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.125 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.125 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.125 [INFO][4092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.142 [INFO][4092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" host="localhost" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.164 [INFO][4092] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.170 [INFO][4092] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.172 [INFO][4092] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.174 [INFO][4092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.174 [INFO][4092] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" host="localhost" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.176 [INFO][4092] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84 Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.180 [INFO][4092] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" host="localhost" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.185 [INFO][4092] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" host="localhost" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.185 [INFO][4092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" host="localhost" Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.185 [INFO][4092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:27:17.212098 containerd[1535]: 2025-07-12 00:27:17.185 [INFO][4092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" HandleID="k8s-pod-network.47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" Workload="localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0" Jul 12 00:27:17.213623 containerd[1535]: 2025-07-12 00:27:17.187 [INFO][4064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" Namespace="calico-system" Pod="whisker-6fdbd4fdd-jdrhw" WorkloadEndpoint="localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0", GenerateName:"whisker-6fdbd4fdd-", Namespace:"calico-system", SelfLink:"", UID:"e0b3208c-71e5-428a-b5a6-daccbb3dab8e", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fdbd4fdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6fdbd4fdd-jdrhw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif6ccc0d09e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:17.213623 containerd[1535]: 2025-07-12 00:27:17.187 [INFO][4064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" Namespace="calico-system" Pod="whisker-6fdbd4fdd-jdrhw" WorkloadEndpoint="localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0" Jul 12 00:27:17.213623 containerd[1535]: 2025-07-12 00:27:17.187 [INFO][4064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6ccc0d09e9 ContainerID="47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" Namespace="calico-system" Pod="whisker-6fdbd4fdd-jdrhw" WorkloadEndpoint="localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0" Jul 12 00:27:17.213623 containerd[1535]: 2025-07-12 00:27:17.195 [INFO][4064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" Namespace="calico-system" Pod="whisker-6fdbd4fdd-jdrhw" WorkloadEndpoint="localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0" Jul 12 00:27:17.213623 containerd[1535]: 2025-07-12 00:27:17.196 [INFO][4064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" Namespace="calico-system" Pod="whisker-6fdbd4fdd-jdrhw" WorkloadEndpoint="localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0", GenerateName:"whisker-6fdbd4fdd-", Namespace:"calico-system", SelfLink:"", UID:"e0b3208c-71e5-428a-b5a6-daccbb3dab8e", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fdbd4fdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84", Pod:"whisker-6fdbd4fdd-jdrhw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif6ccc0d09e9", MAC:"de:9f:43:23:1a:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:17.213623 containerd[1535]: 2025-07-12 00:27:17.208 [INFO][4064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84" Namespace="calico-system" Pod="whisker-6fdbd4fdd-jdrhw" WorkloadEndpoint="localhost-k8s-whisker--6fdbd4fdd--jdrhw-eth0" Jul 12 00:27:17.272272 containerd[1535]: time="2025-07-12T00:27:17.270711551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:17.272272 containerd[1535]: time="2025-07-12T00:27:17.271625927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:17.272272 containerd[1535]: time="2025-07-12T00:27:17.271639448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:17.273865 containerd[1535]: time="2025-07-12T00:27:17.273624137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:17.298028 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:27:17.321995 containerd[1535]: time="2025-07-12T00:27:17.321958089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fdbd4fdd-jdrhw,Uid:e0b3208c-71e5-428a-b5a6-daccbb3dab8e,Namespace:calico-system,Attempt:0,} returns sandbox id \"47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84\"" Jul 12 00:27:17.323581 containerd[1535]: time="2025-07-12T00:27:17.323526693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:27:17.454641 kubelet[2607]: I0712 00:27:17.454605 2607 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579201fa-7a7d-4798-905a-ffd457a5f297" path="/var/lib/kubelet/pods/579201fa-7a7d-4798-905a-ffd457a5f297/volumes" Jul 12 00:27:18.229078 containerd[1535]: time="2025-07-12T00:27:18.229028603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:18.229558 containerd[1535]: time="2025-07-12T00:27:18.229532574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 12 00:27:18.234267 containerd[1535]: time="2025-07-12T00:27:18.234071075Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:18.236229 containerd[1535]: time="2025-07-12T00:27:18.236188651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:18.236894 containerd[1535]: time="2025-07-12T00:27:18.236867920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 912.562025ms" Jul 12 00:27:18.236952 containerd[1535]: time="2025-07-12T00:27:18.236901403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:27:18.238779 containerd[1535]: time="2025-07-12T00:27:18.238686825Z" level=info msg="CreateContainer within sandbox \"47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:27:18.251245 containerd[1535]: time="2025-07-12T00:27:18.251194416Z" level=info msg="CreateContainer within sandbox \"47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1bdef0420c5449502d155afde0de051aec0ffa093f38949b861e7068497f4f36\"" Jul 12 00:27:18.251749 containerd[1535]: time="2025-07-12T00:27:18.251703108Z" level=info msg="StartContainer for \"1bdef0420c5449502d155afde0de051aec0ffa093f38949b861e7068497f4f36\"" Jul 12 00:27:18.321497 containerd[1535]: time="2025-07-12T00:27:18.321360707Z" level=info msg="StartContainer for \"1bdef0420c5449502d155afde0de051aec0ffa093f38949b861e7068497f4f36\" returns successfully" 
Jul 12 00:27:18.322424 containerd[1535]: time="2025-07-12T00:27:18.322391212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:27:18.492498 systemd-networkd[1234]: calif6ccc0d09e9: Gained IPv6LL Jul 12 00:27:18.556358 systemd-networkd[1234]: vxlan.calico: Gained IPv6LL Jul 12 00:27:20.106307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456838418.mount: Deactivated successfully. Jul 12 00:27:20.121743 containerd[1535]: time="2025-07-12T00:27:20.121695708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:20.122691 containerd[1535]: time="2025-07-12T00:27:20.122653919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:27:20.123622 containerd[1535]: time="2025-07-12T00:27:20.123301421Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:20.126849 containerd[1535]: time="2025-07-12T00:27:20.126804036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:20.127836 containerd[1535]: time="2025-07-12T00:27:20.127794331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.805366475s" Jul 12 00:27:20.127880 containerd[1535]: time="2025-07-12T00:27:20.127835575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:27:20.136754 containerd[1535]: time="2025-07-12T00:27:20.136711024Z" level=info msg="CreateContainer within sandbox \"47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:27:20.148182 containerd[1535]: time="2025-07-12T00:27:20.148128596Z" level=info msg="CreateContainer within sandbox \"47007324f78077b7947f07b6d1b3e87a1f7cae2eca9930a8f7ffa56bcb67ac84\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2f00ef6295217bd4a528808ef7f7f97400309b1f1646a26aa833be3576473557\"" Jul 12 00:27:20.148700 containerd[1535]: time="2025-07-12T00:27:20.148673488Z" level=info msg="StartContainer for \"2f00ef6295217bd4a528808ef7f7f97400309b1f1646a26aa833be3576473557\"" Jul 12 00:27:20.278962 containerd[1535]: time="2025-07-12T00:27:20.278834936Z" level=info msg="StartContainer for \"2f00ef6295217bd4a528808ef7f7f97400309b1f1646a26aa833be3576473557\" returns successfully" Jul 12 00:27:21.455001 containerd[1535]: time="2025-07-12T00:27:21.453954534Z" level=info msg="StopPodSandbox for \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\"" Jul 12 00:27:21.455001 containerd[1535]: time="2025-07-12T00:27:21.454067705Z" level=info msg="StopPodSandbox for \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\"" Jul 12 00:27:21.455001 containerd[1535]: 
time="2025-07-12T00:27:21.454453140Z" level=info msg="StopPodSandbox for \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\"" Jul 12 00:27:21.508706 kubelet[2607]: I0712 00:27:21.508614 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6fdbd4fdd-jdrhw" podStartSLOduration=2.702749098 podStartE2EDuration="5.508597971s" podCreationTimestamp="2025-07-12 00:27:16 +0000 UTC" firstStartedPulling="2025-07-12 00:27:17.323078566 +0000 UTC m=+35.955630864" lastFinishedPulling="2025-07-12 00:27:20.128927439 +0000 UTC m=+38.761479737" observedRunningTime="2025-07-12 00:27:20.637093479 +0000 UTC m=+39.269645777" watchObservedRunningTime="2025-07-12 00:27:21.508597971 +0000 UTC m=+40.141150269" Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.519 [INFO][4339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.519 [INFO][4339] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" iface="eth0" netns="/var/run/netns/cni-e03f72f7-182c-f35d-4ac0-1060503fc4aa" Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.519 [INFO][4339] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" iface="eth0" netns="/var/run/netns/cni-e03f72f7-182c-f35d-4ac0-1060503fc4aa" Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.520 [INFO][4339] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" iface="eth0" netns="/var/run/netns/cni-e03f72f7-182c-f35d-4ac0-1060503fc4aa" Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.520 [INFO][4339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.520 [INFO][4339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.557 [INFO][4364] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" HandleID="k8s-pod-network.d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.557 [INFO][4364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.557 [INFO][4364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.577 [WARNING][4364] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" HandleID="k8s-pod-network.d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.577 [INFO][4364] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" HandleID="k8s-pod-network.d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.580 [INFO][4364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:21.585612 containerd[1535]: 2025-07-12 00:27:21.582 [INFO][4339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:21.588146 systemd[1]: run-netns-cni\x2de03f72f7\x2d182c\x2df35d\x2d4ac0\x2d1060503fc4aa.mount: Deactivated successfully. Jul 12 00:27:21.589186 containerd[1535]: time="2025-07-12T00:27:21.588732577Z" level=info msg="TearDown network for sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\" successfully" Jul 12 00:27:21.589186 containerd[1535]: time="2025-07-12T00:27:21.588765460Z" level=info msg="StopPodSandbox for \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\" returns successfully" Jul 12 00:27:21.589608 containerd[1535]: time="2025-07-12T00:27:21.589510810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vmwvl,Uid:d9b719b0-599b-4efc-90b7-09fca6dfcce5,Namespace:calico-system,Attempt:1,}" Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.509 [INFO][4338] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.510 [INFO][4338] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" iface="eth0" netns="/var/run/netns/cni-a83f2b1c-01a2-f16a-be75-cd6218dc5c9f" Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.510 [INFO][4338] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" iface="eth0" netns="/var/run/netns/cni-a83f2b1c-01a2-f16a-be75-cd6218dc5c9f" Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.516 [INFO][4338] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" iface="eth0" netns="/var/run/netns/cni-a83f2b1c-01a2-f16a-be75-cd6218dc5c9f" Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.516 [INFO][4338] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.516 [INFO][4338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.573 [INFO][4362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" HandleID="k8s-pod-network.48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.573 [INFO][4362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.580 [INFO][4362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.590 [WARNING][4362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" HandleID="k8s-pod-network.48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.591 [INFO][4362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" HandleID="k8s-pod-network.48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.593 [INFO][4362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:21.609918 containerd[1535]: 2025-07-12 00:27:21.596 [INFO][4338] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:21.611523 containerd[1535]: time="2025-07-12T00:27:21.611491652Z" level=info msg="TearDown network for sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\" successfully" Jul 12 00:27:21.611523 containerd[1535]: time="2025-07-12T00:27:21.611518614Z" level=info msg="StopPodSandbox for \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\" returns successfully" Jul 12 00:27:21.613480 containerd[1535]: time="2025-07-12T00:27:21.613130564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6799769-2znr7,Uid:f68a8a6e-2029-41f7-af68-f0e9a3b5f706,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:27:21.615406 systemd[1]: run-netns-cni\x2da83f2b1c\x2d01a2\x2df16a\x2dbe75\x2dcd6218dc5c9f.mount: Deactivated successfully. Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.537 [INFO][4337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.537 [INFO][4337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" iface="eth0" netns="/var/run/netns/cni-78774797-6454-f3e8-a4c8-e985d927255b" Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.538 [INFO][4337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" iface="eth0" netns="/var/run/netns/cni-78774797-6454-f3e8-a4c8-e985d927255b" Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.538 [INFO][4337] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" iface="eth0" netns="/var/run/netns/cni-78774797-6454-f3e8-a4c8-e985d927255b" Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.538 [INFO][4337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.538 [INFO][4337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.574 [INFO][4375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" HandleID="k8s-pod-network.128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.574 [INFO][4375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.593 [INFO][4375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.608 [WARNING][4375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" HandleID="k8s-pod-network.128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.608 [INFO][4375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" HandleID="k8s-pod-network.128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.612 [INFO][4375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:21.621610 containerd[1535]: 2025-07-12 00:27:21.616 [INFO][4337] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:21.623206 containerd[1535]: time="2025-07-12T00:27:21.622471312Z" level=info msg="TearDown network for sandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\" successfully" Jul 12 00:27:21.623206 containerd[1535]: time="2025-07-12T00:27:21.622778421Z" level=info msg="StopPodSandbox for \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\" returns successfully" Jul 12 00:27:21.624796 containerd[1535]: time="2025-07-12T00:27:21.624649235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f945c777-w6fsx,Uid:e5ce9aa9-2aec-4285-9ae0-962553767dc1,Namespace:calico-system,Attempt:1,}" Jul 12 00:27:21.626771 systemd[1]: run-netns-cni\x2d78774797\x2d6454\x2df3e8\x2da4c8\x2de985d927255b.mount: Deactivated successfully. Jul 12 00:27:21.763599 systemd-networkd[1234]: cali9bbb008262f: Link UP Jul 12 00:27:21.763835 systemd-networkd[1234]: cali9bbb008262f: Gained carrier Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.672 [INFO][4388] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0 goldmane-58fd7646b9- calico-system d9b719b0-599b-4efc-90b7-09fca6dfcce5 969 0 2025-07-12 00:27:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-vmwvl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9bbb008262f [] [] }} ContainerID="35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" Namespace="calico-system" Pod="goldmane-58fd7646b9-vmwvl" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vmwvl-" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.673 [INFO][4388] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" Namespace="calico-system" Pod="goldmane-58fd7646b9-vmwvl" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.706 [INFO][4436] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" HandleID="k8s-pod-network.35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.707 [INFO][4436] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" HandleID="k8s-pod-network.35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-vmwvl", "timestamp":"2025-07-12 00:27:21.706852353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.707 [INFO][4436] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.707 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.707 [INFO][4436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.725 [INFO][4436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" host="localhost" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.731 [INFO][4436] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.737 [INFO][4436] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.739 [INFO][4436] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.743 [INFO][4436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.743 [INFO][4436] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" host="localhost" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.744 [INFO][4436] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.748 [INFO][4436] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" host="localhost" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.754 [INFO][4436] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" host="localhost" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.754 [INFO][4436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" host="localhost" Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.754 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
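The kubelet pod_startup_latency_tracker entry above (pod calico-system/whisker-6fdbd4fdd-jdrhw) reports podStartSLOduration and podStartE2EDuration alongside the raw timestamps, and the figures are consistent with the SLO duration being the end-to-end startup time minus the image-pull window. A quick check with the values copied from that entry; this is a sketch, and the variable names are ours, not kubelet's:

```python
# Seconds within minute 00:27, copied from the kubelet entry above.
created      = 16.0           # podCreationTimestamp       00:27:16
pull_start   = 17.323078566   # firstStartedPulling
pull_end     = 20.128927439   # lastFinishedPulling
running_seen = 21.508597971   # watchObservedRunningTime

e2e = running_seen - created          # podStartE2EDuration
slo = e2e - (pull_end - pull_start)   # podStartSLOduration, excluding the image pull

print(f"E2E: {e2e:.9f}s")   # ~5.508597971s, as reported
print(f"SLO: {slo:.9f}s")   # ~2.702749098s, as reported
```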
Jul 12 00:27:21.783299 containerd[1535]: 2025-07-12 00:27:21.754 [INFO][4436] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" HandleID="k8s-pod-network.35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:21.783859 containerd[1535]: 2025-07-12 00:27:21.760 [INFO][4388] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" Namespace="calico-system" Pod="goldmane-58fd7646b9-vmwvl" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d9b719b0-599b-4efc-90b7-09fca6dfcce5", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-vmwvl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9bbb008262f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:21.783859 containerd[1535]: 2025-07-12 00:27:21.760 [INFO][4388] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" Namespace="calico-system" Pod="goldmane-58fd7646b9-vmwvl" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:21.783859 containerd[1535]: 2025-07-12 00:27:21.760 [INFO][4388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bbb008262f ContainerID="35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" Namespace="calico-system" Pod="goldmane-58fd7646b9-vmwvl" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:21.783859 containerd[1535]: 2025-07-12 00:27:21.765 [INFO][4388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" Namespace="calico-system" Pod="goldmane-58fd7646b9-vmwvl" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:21.783859 containerd[1535]: 2025-07-12 00:27:21.767 [INFO][4388] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" Namespace="calico-system" Pod="goldmane-58fd7646b9-vmwvl" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d9b719b0-599b-4efc-90b7-09fca6dfcce5", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb", Pod:"goldmane-58fd7646b9-vmwvl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9bbb008262f", MAC:"92:94:1e:f9:e9:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:21.783859 containerd[1535]: 2025-07-12 00:27:21.780 [INFO][4388] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb" Namespace="calico-system" Pod="goldmane-58fd7646b9-vmwvl" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:21.800227 containerd[1535]: time="2025-07-12T00:27:21.800047292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:21.800227 containerd[1535]: time="2025-07-12T00:27:21.800132940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:21.800227 containerd[1535]: time="2025-07-12T00:27:21.800145101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:21.801436 containerd[1535]: time="2025-07-12T00:27:21.800337239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:21.835602 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:27:21.863559 systemd-networkd[1234]: calicf32c8d9cf3: Link UP Jul 12 00:27:21.864034 systemd-networkd[1234]: calicf32c8d9cf3: Gained carrier Jul 12 00:27:21.866763 containerd[1535]: time="2025-07-12T00:27:21.866709646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vmwvl,Uid:d9b719b0-599b-4efc-90b7-09fca6dfcce5,Namespace:calico-system,Attempt:1,} returns sandbox id \"35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb\"" Jul 12 00:27:21.875866 containerd[1535]: time="2025-07-12T00:27:21.875815172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.671 [INFO][4400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0 calico-apiserver-59f6799769- calico-apiserver f68a8a6e-2029-41f7-af68-f0e9a3b5f706 968 0 2025-07-12 00:26:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f6799769 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59f6799769-2znr7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicf32c8d9cf3 [] [] }} ContainerID="15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-2znr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--2znr7-" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.671 [INFO][4400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-2znr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.717 [INFO][4430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" HandleID="k8s-pod-network.15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.717 [INFO][4430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" HandleID="k8s-pod-network.15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c31a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59f6799769-2znr7", "timestamp":"2025-07-12 00:27:21.717278761 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.717 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
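The Workload handles in these CNI entries (for example localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0) combine node name, pod name, and interface, with single dashes inside the pod name doubled so the separators stay unambiguous. A minimal sketch that reproduces the names seen in this log; the escaping rule is inferred from these entries rather than taken from Calico's source:

```python
def workload_endpoint_name(node: str, pod: str, iface: str) -> str:
    # Dashes in the pod name are doubled so the single-dash separators
    # between node, pod, and interface stay unambiguous
    # (pattern inferred from the Workload= fields above).
    return f"{node}-k8s-{pod.replace('-', '--')}-{iface}"

assert workload_endpoint_name("localhost", "goldmane-58fd7646b9-vmwvl", "eth0") \
    == "localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0"
assert workload_endpoint_name("localhost", "calico-apiserver-59f6799769-2znr7", "eth0") \
    == "localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0"
```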
Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.754 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.754 [INFO][4430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.826 [INFO][4430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" host="localhost" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.831 [INFO][4430] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.838 [INFO][4430] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.840 [INFO][4430] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.843 [INFO][4430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.844 [INFO][4430] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" host="localhost" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.845 [INFO][4430] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101 Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.850 [INFO][4430] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" host="localhost" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.856 [INFO][4430] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" host="localhost" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.856 [INFO][4430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" host="localhost" Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.856 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
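The goldmane and calico-apiserver pods above were assigned 192.168.88.130 and 192.168.88.131 out of the same affine block, 192.168.88.128/26, that the IPAM plugin loads before each assignment. A small illustration of what that block spans, using only values visible in these entries and Python's standard ipaddress module (not Calico code):

```python
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")   # affine block from the entries above

print(block.num_addresses)   # 64
print(block[0], block[-1])   # 192.168.88.128 192.168.88.191

# Addresses claimed so far in this section of the log.
for claimed in ("192.168.88.130", "192.168.88.131"):
    assert ipaddress.ip_address(claimed) in block
```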
Jul 12 00:27:21.914994 containerd[1535]: 2025-07-12 00:27:21.856 [INFO][4430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" HandleID="k8s-pod-network.15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:21.915568 containerd[1535]: 2025-07-12 00:27:21.858 [INFO][4400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-2znr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0", GenerateName:"calico-apiserver-59f6799769-", Namespace:"calico-apiserver", SelfLink:"", UID:"f68a8a6e-2029-41f7-af68-f0e9a3b5f706", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6799769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59f6799769-2znr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf32c8d9cf3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:21.915568 containerd[1535]: 2025-07-12 00:27:21.858 [INFO][4400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-2znr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:21.915568 containerd[1535]: 2025-07-12 00:27:21.858 [INFO][4400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf32c8d9cf3 ContainerID="15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-2znr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:21.915568 containerd[1535]: 2025-07-12 00:27:21.864 [INFO][4400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-2znr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:21.915568 containerd[1535]: 2025-07-12 00:27:21.870 [INFO][4400] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-2znr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0", GenerateName:"calico-apiserver-59f6799769-", Namespace:"calico-apiserver", SelfLink:"", UID:"f68a8a6e-2029-41f7-af68-f0e9a3b5f706", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6799769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101", Pod:"calico-apiserver-59f6799769-2znr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf32c8d9cf3", MAC:"b6:e0:11:89:2e:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:21.915568 containerd[1535]: 2025-07-12 00:27:21.912 [INFO][4400] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-2znr7" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:21.922172 kubelet[2607]: I0712 00:27:21.922137 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:27:21.942751 containerd[1535]: time="2025-07-12T00:27:21.942600738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:21.942945 containerd[1535]: time="2025-07-12T00:27:21.942889244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:21.943157 containerd[1535]: time="2025-07-12T00:27:21.943032938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:21.943391 containerd[1535]: time="2025-07-12T00:27:21.943343367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:21.973738 systemd-networkd[1234]: calif32bfabfaeb: Link UP Jul 12 00:27:21.974680 systemd-networkd[1234]: calif32bfabfaeb: Gained carrier Jul 12 00:27:21.977060 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.691 [INFO][4411] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0 calico-kube-controllers-79f945c777- calico-system e5ce9aa9-2aec-4285-9ae0-962553767dc1 970 0 2025-07-12 00:27:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79f945c777 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-79f945c777-w6fsx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif32bfabfaeb [] [] }} ContainerID="cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" Namespace="calico-system" Pod="calico-kube-controllers-79f945c777-w6fsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.691 [INFO][4411] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" Namespace="calico-system" Pod="calico-kube-controllers-79f945c777-w6fsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.736 [INFO][4445] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" HandleID="k8s-pod-network.cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.737 [INFO][4445] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" HandleID="k8s-pod-network.cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001367b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-79f945c777-w6fsx", "timestamp":"2025-07-12 00:27:21.736475185 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.737 [INFO][4445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.856 [INFO][4445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
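The MAC addresses written back to the endpoints above (92:94:1e:f9:e9:48 for goldmane, b6:e0:11:89:2e:64 for calico-apiserver) look like generated rather than vendor-assigned addresses: both have the locally-administered bit set and the multicast bit clear. A quick check on the values from the log; this is a sketch, not Calico's MAC-generation logic:

```python
# MAC addresses recorded for the goldmane and calico-apiserver endpoints above.
for mac in ("92:94:1e:f9:e9:48", "b6:e0:11:89:2e:64"):
    first_octet = int(mac.split(":")[0], 16)
    locally_administered = bool(first_octet & 0x02)   # U/L bit of the first octet
    unicast = not (first_octet & 0x01)                # I/G bit clear => unicast
    print(mac, locally_administered, unicast)         # both print: True True
```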
Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.856 [INFO][4445] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.926 [INFO][4445] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" host="localhost" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.937 [INFO][4445] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.943 [INFO][4445] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.946 [INFO][4445] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.949 [INFO][4445] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.949 [INFO][4445] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" host="localhost" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.952 [INFO][4445] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5 Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.960 [INFO][4445] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" host="localhost" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.966 [INFO][4445] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" host="localhost" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.966 [INFO][4445] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" host="localhost" Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.966 [INFO][4445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
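The Calico cni-plugin/ipam lines wrapped inside these containerd entries follow a fixed layout: timestamp, [LEVEL][id], source file and line number, then the message. One way to pull those fields out for analysis, sketched with a regex whose group names are ours; the sample line is copied verbatim from this log:

```python
import re

# "<date> <time> [LEVEL][id] path/file.go line: message"
calico_line = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"\[(?P<level>[A-Z]+)\]\[(?P<id>\d+)\] "
    r"(?P<source>\S+) (?P<lineno>\d+): (?P<msg>.*)"
)

sample = "2025-07-12 00:27:21.966 [INFO][4445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock."
m = calico_line.match(sample)
print(m.group("level"), m.group("source"), m.group("msg"))
# INFO ipam/ipam_plugin.go Released host-wide IPAM lock.
```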
Jul 12 00:27:21.990904 containerd[1535]: 2025-07-12 00:27:21.966 [INFO][4445] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" HandleID="k8s-pod-network.cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:21.991488 containerd[1535]: 2025-07-12 00:27:21.969 [INFO][4411] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" Namespace="calico-system" Pod="calico-kube-controllers-79f945c777-w6fsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0", GenerateName:"calico-kube-controllers-79f945c777-", Namespace:"calico-system", SelfLink:"", UID:"e5ce9aa9-2aec-4285-9ae0-962553767dc1", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f945c777", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-79f945c777-w6fsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif32bfabfaeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:21.991488 containerd[1535]: 2025-07-12 00:27:21.969 [INFO][4411] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" Namespace="calico-system" Pod="calico-kube-controllers-79f945c777-w6fsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:21.991488 containerd[1535]: 2025-07-12 00:27:21.969 [INFO][4411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif32bfabfaeb ContainerID="cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" Namespace="calico-system" Pod="calico-kube-controllers-79f945c777-w6fsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:21.991488 containerd[1535]: 2025-07-12 00:27:21.973 [INFO][4411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" Namespace="calico-system" Pod="calico-kube-controllers-79f945c777-w6fsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:21.991488 containerd[1535]: 2025-07-12 00:27:21.973 [INFO][4411] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" Namespace="calico-system" Pod="calico-kube-controllers-79f945c777-w6fsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0", GenerateName:"calico-kube-controllers-79f945c777-", Namespace:"calico-system", SelfLink:"", UID:"e5ce9aa9-2aec-4285-9ae0-962553767dc1", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f945c777", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5", Pod:"calico-kube-controllers-79f945c777-w6fsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif32bfabfaeb", MAC:"7a:35:5e:15:ef:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:21.991488 containerd[1535]: 2025-07-12 00:27:21.988 [INFO][4411] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5" Namespace="calico-system" Pod="calico-kube-controllers-79f945c777-w6fsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:22.011020 containerd[1535]: time="2025-07-12T00:27:22.010962065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6799769-2znr7,Uid:f68a8a6e-2029-41f7-af68-f0e9a3b5f706,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101\"" Jul 12 00:27:22.017945 containerd[1535]: time="2025-07-12T00:27:22.017731996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:22.017945 containerd[1535]: time="2025-07-12T00:27:22.017918173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:22.018108 containerd[1535]: time="2025-07-12T00:27:22.017970698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:22.018143 containerd[1535]: time="2025-07-12T00:27:22.018096189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:22.041723 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:27:22.090167 containerd[1535]: time="2025-07-12T00:27:22.090123018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f945c777-w6fsx,Uid:e5ce9aa9-2aec-4285-9ae0-962553767dc1,Namespace:calico-system,Attempt:1,} returns sandbox id \"cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5\"" Jul 12 00:27:22.453468 containerd[1535]: time="2025-07-12T00:27:22.453424168Z" level=info msg="StopPodSandbox for \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\"" Jul 12 00:27:22.454434 containerd[1535]: time="2025-07-12T00:27:22.454367813Z" level=info msg="StopPodSandbox for \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\"" Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.504 [INFO][4684] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.504 [INFO][4684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" iface="eth0" netns="/var/run/netns/cni-539f21b0-a00f-3684-4652-e0cd47db998d" Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.504 [INFO][4684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" iface="eth0" netns="/var/run/netns/cni-539f21b0-a00f-3684-4652-e0cd47db998d" Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.504 [INFO][4684] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" iface="eth0" netns="/var/run/netns/cni-539f21b0-a00f-3684-4652-e0cd47db998d" Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.504 [INFO][4684] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.504 [INFO][4684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.525 [INFO][4703] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" HandleID="k8s-pod-network.808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.526 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.526 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.534 [WARNING][4703] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" HandleID="k8s-pod-network.808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.534 [INFO][4703] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" HandleID="k8s-pod-network.808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.536 [INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:22.539614 containerd[1535]: 2025-07-12 00:27:22.538 [INFO][4684] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:22.540741 containerd[1535]: time="2025-07-12T00:27:22.539741208Z" level=info msg="TearDown network for sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\" successfully" Jul 12 00:27:22.540741 containerd[1535]: time="2025-07-12T00:27:22.539777171Z" level=info msg="StopPodSandbox for \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\" returns successfully" Jul 12 00:27:22.540801 kubelet[2607]: E0712 00:27:22.540123 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:22.541041 containerd[1535]: time="2025-07-12T00:27:22.540785663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gjr7m,Uid:b4305437-9cc5-4131-a2db-7fb983a5778e,Namespace:kube-system,Attempt:1,}" Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.503 [INFO][4690] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.504 [INFO][4690] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" iface="eth0" netns="/var/run/netns/cni-2bfae88b-2050-ef51-83c0-60cdc9e23889" Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.504 [INFO][4690] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" iface="eth0" netns="/var/run/netns/cni-2bfae88b-2050-ef51-83c0-60cdc9e23889" Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.504 [INFO][4690] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" iface="eth0" netns="/var/run/netns/cni-2bfae88b-2050-ef51-83c0-60cdc9e23889" Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.504 [INFO][4690] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.504 [INFO][4690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.525 [INFO][4702] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" HandleID="k8s-pod-network.64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.526 [INFO][4702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.536 [INFO][4702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.546 [WARNING][4702] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" HandleID="k8s-pod-network.64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.546 [INFO][4702] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" HandleID="k8s-pod-network.64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.548 [INFO][4702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:22.552326 containerd[1535]: 2025-07-12 00:27:22.550 [INFO][4690] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:22.552930 containerd[1535]: time="2025-07-12T00:27:22.552895237Z" level=info msg="TearDown network for sandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\" successfully" Jul 12 00:27:22.552968 containerd[1535]: time="2025-07-12T00:27:22.552929840Z" level=info msg="StopPodSandbox for \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\" returns successfully" Jul 12 00:27:22.554012 kubelet[2607]: E0712 00:27:22.553285 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:22.554109 containerd[1535]: time="2025-07-12T00:27:22.553970534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-prwgn,Uid:2ce3af4a-44f7-483b-9fa4-a5cd1f72b652,Namespace:kube-system,Attempt:1,}" Jul 12 00:27:22.679686 systemd-networkd[1234]: cali27dc5cc0caa: Link UP Jul 12 00:27:22.680166 systemd-networkd[1234]: cali27dc5cc0caa: Gained carrier Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.592 [INFO][4717] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0 coredns-7c65d6cfc9- kube-system b4305437-9cc5-4131-a2db-7fb983a5778e 989 0 2025-07-12 00:26:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-gjr7m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali27dc5cc0caa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjr7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjr7m-" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.592 [INFO][4717] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjr7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.626 [INFO][4743] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" HandleID="k8s-pod-network.86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.626 [INFO][4743] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" HandleID="k8s-pod-network.86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-gjr7m", "timestamp":"2025-07-12 00:27:22.626527571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 
00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.626 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.626 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.626 [INFO][4743] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.637 [INFO][4743] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" host="localhost" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.642 [INFO][4743] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.647 [INFO][4743] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.651 [INFO][4743] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.655 [INFO][4743] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.655 [INFO][4743] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" host="localhost" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.657 [INFO][4743] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56 Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.663 [INFO][4743] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" host="localhost" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.672 [INFO][4743] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" host="localhost" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.672 [INFO][4743] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" host="localhost" Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.672 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:27:22.745130 containerd[1535]: 2025-07-12 00:27:22.672 [INFO][4743] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" HandleID="k8s-pod-network.86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:22.746977 containerd[1535]: 2025-07-12 00:27:22.676 [INFO][4717] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjr7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b4305437-9cc5-4131-a2db-7fb983a5778e", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-gjr7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27dc5cc0caa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:22.746977 containerd[1535]: 2025-07-12 00:27:22.676 [INFO][4717] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjr7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:22.746977 containerd[1535]: 2025-07-12 00:27:22.676 [INFO][4717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27dc5cc0caa ContainerID="86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjr7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:22.746977 containerd[1535]: 2025-07-12 00:27:22.682 [INFO][4717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjr7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:22.746977 
containerd[1535]: 2025-07-12 00:27:22.682 [INFO][4717] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjr7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b4305437-9cc5-4131-a2db-7fb983a5778e", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56", Pod:"coredns-7c65d6cfc9-gjr7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27dc5cc0caa", MAC:"76:c1:ce:38:99:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:22.746977 containerd[1535]: 2025-07-12 00:27:22.740 [INFO][4717] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjr7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:22.819508 systemd[1]: run-netns-cni\x2d2bfae88b\x2d2050\x2def51\x2d83c0\x2d60cdc9e23889.mount: Deactivated successfully. Jul 12 00:27:22.819651 systemd[1]: run-netns-cni\x2d539f21b0\x2da00f\x2d3684\x2d4652\x2de0cd47db998d.mount: Deactivated successfully. Jul 12 00:27:22.834475 containerd[1535]: time="2025-07-12T00:27:22.833989678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:22.834475 containerd[1535]: time="2025-07-12T00:27:22.834040003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:22.834475 containerd[1535]: time="2025-07-12T00:27:22.834055364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:22.834475 containerd[1535]: time="2025-07-12T00:27:22.834246981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:22.860403 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:27:22.881589 systemd-networkd[1234]: cali03d92d7cc10: Link UP Jul 12 00:27:22.882470 systemd-networkd[1234]: cali03d92d7cc10: Gained carrier Jul 12 00:27:22.891159 containerd[1535]: time="2025-07-12T00:27:22.891064876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gjr7m,Uid:b4305437-9cc5-4131-a2db-7fb983a5778e,Namespace:kube-system,Attempt:1,} returns sandbox id \"86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56\"" Jul 12 00:27:22.893140 kubelet[2607]: E0712 00:27:22.893104 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:22.896311 containerd[1535]: time="2025-07-12T00:27:22.895730057Z" level=info msg="CreateContainer within sandbox \"86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.601 [INFO][4729] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0 coredns-7c65d6cfc9- kube-system 2ce3af4a-44f7-483b-9fa4-a5cd1f72b652 990 0 2025-07-12 00:26:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-prwgn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali03d92d7cc10 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-prwgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--prwgn-" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.602 [INFO][4729] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-prwgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.639 [INFO][4750] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" HandleID="k8s-pod-network.a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.639 [INFO][4750] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" HandleID="k8s-pod-network.a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136780), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-prwgn", "timestamp":"2025-07-12 
00:27:22.639302285 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.639 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.672 [INFO][4750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.672 [INFO][4750] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.743 [INFO][4750] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" host="localhost" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.750 [INFO][4750] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.755 [INFO][4750] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.758 [INFO][4750] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.760 [INFO][4750] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.761 [INFO][4750] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" host="localhost" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.764 [INFO][4750] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5 Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.813 [INFO][4750] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" host="localhost" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.872 [INFO][4750] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" host="localhost" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.872 [INFO][4750] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" host="localhost" Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.872 [INFO][4750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:27:22.902844 containerd[1535]: 2025-07-12 00:27:22.872 [INFO][4750] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" HandleID="k8s-pod-network.a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:22.904924 containerd[1535]: 2025-07-12 00:27:22.879 [INFO][4729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-prwgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ce3af4a-44f7-483b-9fa4-a5cd1f72b652", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-prwgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03d92d7cc10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:22.904924 containerd[1535]: 2025-07-12 00:27:22.879 [INFO][4729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-prwgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:22.904924 containerd[1535]: 2025-07-12 00:27:22.879 [INFO][4729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03d92d7cc10 ContainerID="a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-prwgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:22.904924 containerd[1535]: 2025-07-12 00:27:22.882 [INFO][4729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-prwgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:22.904924 
containerd[1535]: 2025-07-12 00:27:22.883 [INFO][4729] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-prwgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ce3af4a-44f7-483b-9fa4-a5cd1f72b652", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5", Pod:"coredns-7c65d6cfc9-prwgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03d92d7cc10", MAC:"52:e6:9f:61:fe:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:22.904924 containerd[1535]: 2025-07-12 00:27:22.898 [INFO][4729] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-prwgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:22.935149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015133470.mount: Deactivated successfully. Jul 12 00:27:22.951499 containerd[1535]: time="2025-07-12T00:27:22.951424210Z" level=info msg="CreateContainer within sandbox \"86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"401f1fb47cde670b1e163f5e8c3fc6296a8c1629b1b496b428bf0378ea5da562\"" Jul 12 00:27:22.951737 containerd[1535]: time="2025-07-12T00:27:22.950933686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:22.951737 containerd[1535]: time="2025-07-12T00:27:22.950995932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:22.951737 containerd[1535]: time="2025-07-12T00:27:22.951011253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:22.951737 containerd[1535]: time="2025-07-12T00:27:22.951100861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:22.953041 containerd[1535]: time="2025-07-12T00:27:22.952587555Z" level=info msg="StartContainer for \"401f1fb47cde670b1e163f5e8c3fc6296a8c1629b1b496b428bf0378ea5da562\"" Jul 12 00:27:22.996670 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:27:23.037512 containerd[1535]: time="2025-07-12T00:27:23.037383809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-prwgn,Uid:2ce3af4a-44f7-483b-9fa4-a5cd1f72b652,Namespace:kube-system,Attempt:1,} returns sandbox id \"a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5\"" Jul 12 00:27:23.038417 kubelet[2607]: E0712 00:27:23.038389 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:23.040875 containerd[1535]: time="2025-07-12T00:27:23.040818991Z" level=info msg="CreateContainer within sandbox \"a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:27:23.165570 systemd-networkd[1234]: cali9bbb008262f: Gained IPv6LL Jul 12 00:27:23.170398 containerd[1535]: time="2025-07-12T00:27:23.170251458Z" level=info msg="StartContainer for \"401f1fb47cde670b1e163f5e8c3fc6296a8c1629b1b496b428bf0378ea5da562\" returns successfully" Jul 12 00:27:23.221205 containerd[1535]: time="2025-07-12T00:27:23.220956799Z" level=info msg="CreateContainer within sandbox \"a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01c5fe7b846f6bd43ccff556c45f89e2cff9f9678a4d29f8b1c7279b82277389\"" Jul 12 00:27:23.222745 containerd[1535]: time="2025-07-12T00:27:23.222331800Z" level=info msg="StartContainer for \"01c5fe7b846f6bd43ccff556c45f89e2cff9f9678a4d29f8b1c7279b82277389\"" Jul 12 00:27:23.301296 containerd[1535]: time="2025-07-12T00:27:23.301168655Z" level=info msg="StartContainer for \"01c5fe7b846f6bd43ccff556c45f89e2cff9f9678a4d29f8b1c7279b82277389\" returns successfully" Jul 12 00:27:23.484371 systemd-networkd[1234]: calicf32c8d9cf3: Gained IPv6LL Jul 12 00:27:23.626948 kubelet[2607]: E0712 00:27:23.626812 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:23.629256 kubelet[2607]: E0712 00:27:23.629183 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:23.671326 kubelet[2607]: I0712 00:27:23.668535 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-prwgn" podStartSLOduration=34.668515492 podStartE2EDuration="34.668515492s" podCreationTimestamp="2025-07-12 00:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:27:23.653946171 +0000 UTC m=+42.286498509" watchObservedRunningTime="2025-07-12 00:27:23.668515492 +0000 UTC m=+42.301067790" Jul 12 00:27:23.694210 
kubelet[2607]: I0712 00:27:23.693878 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gjr7m" podStartSLOduration=34.693857642 podStartE2EDuration="34.693857642s" podCreationTimestamp="2025-07-12 00:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:27:23.67598791 +0000 UTC m=+42.308540248" watchObservedRunningTime="2025-07-12 00:27:23.693857642 +0000 UTC m=+42.326409940" Jul 12 00:27:23.996533 systemd-networkd[1234]: calif32bfabfaeb: Gained IPv6LL Jul 12 00:27:24.161340 containerd[1535]: time="2025-07-12T00:27:24.161288083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:24.162609 containerd[1535]: time="2025-07-12T00:27:24.162569592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 00:27:24.164565 containerd[1535]: time="2025-07-12T00:27:24.164511679Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:24.172333 containerd[1535]: time="2025-07-12T00:27:24.172275145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:24.173198 containerd[1535]: time="2025-07-12T00:27:24.173154420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.297293924s" Jul 12 00:27:24.173198 containerd[1535]: time="2025-07-12T00:27:24.173190823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:27:24.176251 containerd[1535]: time="2025-07-12T00:27:24.176113794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:27:24.178718 containerd[1535]: time="2025-07-12T00:27:24.178683014Z" level=info msg="CreateContainer within sandbox \"35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:27:24.198477 containerd[1535]: time="2025-07-12T00:27:24.197741848Z" level=info msg="CreateContainer within sandbox \"35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ada4dee9fcdeebb4ff7930a4b1dae5cde72499e6e8c5b8a2d95e12aab9da8da3\"" Jul 12 00:27:24.199348 containerd[1535]: time="2025-07-12T00:27:24.199314663Z" level=info msg="StartContainer for \"ada4dee9fcdeebb4ff7930a4b1dae5cde72499e6e8c5b8a2d95e12aab9da8da3\"" Jul 12 00:27:24.259557 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:43864.service - OpenSSH per-connection server daemon (10.0.0.1:43864). 
Jul 12 00:27:24.307577 containerd[1535]: time="2025-07-12T00:27:24.307527900Z" level=info msg="StartContainer for \"ada4dee9fcdeebb4ff7930a4b1dae5cde72499e6e8c5b8a2d95e12aab9da8da3\" returns successfully" Jul 12 00:27:24.318036 sshd[4978]: Accepted publickey for core from 10.0.0.1 port 43864 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:24.320394 sshd[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:24.326874 systemd-logind[1519]: New session 8 of user core. Jul 12 00:27:24.332967 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 00:27:24.382225 systemd-networkd[1234]: cali27dc5cc0caa: Gained IPv6LL Jul 12 00:27:24.563505 sshd[4978]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:24.567144 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:43864.service: Deactivated successfully. Jul 12 00:27:24.569809 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:27:24.570564 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:27:24.571726 systemd-logind[1519]: Removed session 8. Jul 12 00:27:24.636058 kubelet[2607]: E0712 00:27:24.636019 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:24.638452 systemd-networkd[1234]: cali03d92d7cc10: Gained IPv6LL Jul 12 00:27:24.639406 kubelet[2607]: E0712 00:27:24.638952 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:24.653658 kubelet[2607]: I0712 00:27:24.653424 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-vmwvl" podStartSLOduration=19.352133155 podStartE2EDuration="21.653405233s" podCreationTimestamp="2025-07-12 00:27:03 +0000 UTC" firstStartedPulling="2025-07-12 00:27:21.873994723 +0000 UTC m=+40.506546981" lastFinishedPulling="2025-07-12 00:27:24.175266761 +0000 UTC m=+42.807819059" observedRunningTime="2025-07-12 00:27:24.653129409 +0000 UTC m=+43.285681747" watchObservedRunningTime="2025-07-12 00:27:24.653405233 +0000 UTC m=+43.285957491" Jul 12 00:27:25.454516 containerd[1535]: time="2025-07-12T00:27:25.454352107Z" level=info msg="StopPodSandbox for \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\"" Jul 12 00:27:25.456904 containerd[1535]: time="2025-07-12T00:27:25.454403751Z" level=info msg="StopPodSandbox for \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\"" Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.518 [INFO][5032] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.519 [INFO][5032] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" iface="eth0" netns="/var/run/netns/cni-c5bafaa7-9a73-9498-a94e-6475b8ac4f60" Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.519 [INFO][5032] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" iface="eth0" netns="/var/run/netns/cni-c5bafaa7-9a73-9498-a94e-6475b8ac4f60" Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.519 [INFO][5032] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" iface="eth0" netns="/var/run/netns/cni-c5bafaa7-9a73-9498-a94e-6475b8ac4f60" Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.519 [INFO][5032] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.519 [INFO][5032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.556 [INFO][5053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" HandleID="k8s-pod-network.cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.556 [INFO][5053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.556 [INFO][5053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.571 [WARNING][5053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" HandleID="k8s-pod-network.cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.571 [INFO][5053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" HandleID="k8s-pod-network.cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.574 [INFO][5053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:25.592331 containerd[1535]: 2025-07-12 00:27:25.578 [INFO][5032] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:25.600035 systemd[1]: run-netns-cni\x2dc5bafaa7\x2d9a73\x2d9498\x2da94e\x2d6475b8ac4f60.mount: Deactivated successfully. 
Jul 12 00:27:25.601980 containerd[1535]: time="2025-07-12T00:27:25.600101896Z" level=info msg="TearDown network for sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\" successfully" Jul 12 00:27:25.601980 containerd[1535]: time="2025-07-12T00:27:25.600134659Z" level=info msg="StopPodSandbox for \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\" returns successfully" Jul 12 00:27:25.601980 containerd[1535]: time="2025-07-12T00:27:25.601260313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g8ld7,Uid:37964168-0f35-42e8-bcb1-d8b1fcfa1415,Namespace:calico-system,Attempt:1,}" Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.529 [INFO][5042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.529 [INFO][5042] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" iface="eth0" netns="/var/run/netns/cni-d4117474-c516-6cca-74a2-77a0ab08a566" Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.529 [INFO][5042] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" iface="eth0" netns="/var/run/netns/cni-d4117474-c516-6cca-74a2-77a0ab08a566" Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.529 [INFO][5042] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" iface="eth0" netns="/var/run/netns/cni-d4117474-c516-6cca-74a2-77a0ab08a566" Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.529 [INFO][5042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.530 [INFO][5042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.582 [INFO][5059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" HandleID="k8s-pod-network.7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.583 [INFO][5059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.583 [INFO][5059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.603 [WARNING][5059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" HandleID="k8s-pod-network.7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.603 [INFO][5059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" HandleID="k8s-pod-network.7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.606 [INFO][5059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:25.618855 containerd[1535]: 2025-07-12 00:27:25.615 [INFO][5042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:25.619356 containerd[1535]: time="2025-07-12T00:27:25.619034280Z" level=info msg="TearDown network for sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\" successfully" Jul 12 00:27:25.619356 containerd[1535]: time="2025-07-12T00:27:25.619062122Z" level=info msg="StopPodSandbox for \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\" returns successfully" Jul 12 00:27:25.621669 systemd[1]: run-netns-cni\x2dd4117474\x2dc516\x2d6cca\x2d74a2\x2d77a0ab08a566.mount: Deactivated successfully. Jul 12 00:27:25.622532 containerd[1535]: time="2025-07-12T00:27:25.622162461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6799769-qhdzl,Uid:3f5a9e1e-e147-4c32-bfe2-69710b77be5f,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:27:25.638669 kubelet[2607]: E0712 00:27:25.638622 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:25.639533 kubelet[2607]: E0712 00:27:25.639321 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:27:26.123287 systemd-networkd[1234]: caliaa39f251ec4: Link UP Jul 12 00:27:26.123491 systemd-networkd[1234]: caliaa39f251ec4: Gained carrier Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.005 [INFO][5104] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0 calico-apiserver-59f6799769- calico-apiserver 3f5a9e1e-e147-4c32-bfe2-69710b77be5f 1077 0 2025-07-12 00:26:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f6799769 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59f6799769-qhdzl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa39f251ec4 [] [] }} ContainerID="2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-qhdzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--qhdzl-" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.006 [INFO][5104] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-qhdzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.042 [INFO][5124] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" HandleID="k8s-pod-network.2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.042 [INFO][5124] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" HandleID="k8s-pod-network.2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3280), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59f6799769-qhdzl", "timestamp":"2025-07-12 00:27:26.04210134 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.042 [INFO][5124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.042 [INFO][5124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.042 [INFO][5124] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.054 [INFO][5124] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" host="localhost" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.059 [INFO][5124] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.063 [INFO][5124] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.066 [INFO][5124] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.069 [INFO][5124] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.069 [INFO][5124] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" host="localhost" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.073 [INFO][5124] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.093 [INFO][5124] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" host="localhost" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 
00:27:26.112 [INFO][5124] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" host="localhost" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.112 [INFO][5124] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" host="localhost" Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.112 [INFO][5124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:26.158279 containerd[1535]: 2025-07-12 00:27:26.112 [INFO][5124] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" HandleID="k8s-pod-network.2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:26.159693 containerd[1535]: 2025-07-12 00:27:26.117 [INFO][5104] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-qhdzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0", GenerateName:"calico-apiserver-59f6799769-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f5a9e1e-e147-4c32-bfe2-69710b77be5f", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6799769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59f6799769-qhdzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa39f251ec4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:26.159693 containerd[1535]: 2025-07-12 00:27:26.117 [INFO][5104] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-qhdzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:26.159693 containerd[1535]: 2025-07-12 00:27:26.117 [INFO][5104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa39f251ec4 ContainerID="2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-qhdzl" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:26.159693 containerd[1535]: 2025-07-12 00:27:26.124 [INFO][5104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-qhdzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:26.159693 containerd[1535]: 2025-07-12 00:27:26.125 [INFO][5104] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-qhdzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0", GenerateName:"calico-apiserver-59f6799769-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f5a9e1e-e147-4c32-bfe2-69710b77be5f", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6799769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f", Pod:"calico-apiserver-59f6799769-qhdzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa39f251ec4", MAC:"4a:e3:6c:2d:69:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:26.159693 containerd[1535]: 2025-07-12 00:27:26.153 [INFO][5104] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f" Namespace="calico-apiserver" Pod="calico-apiserver-59f6799769-qhdzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:26.191613 containerd[1535]: time="2025-07-12T00:27:26.190974217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:26.191613 containerd[1535]: time="2025-07-12T00:27:26.191030022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:26.191613 containerd[1535]: time="2025-07-12T00:27:26.191062304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:26.191613 containerd[1535]: time="2025-07-12T00:27:26.191259720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:26.222100 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:27:26.247707 systemd-networkd[1234]: cali86f110e5f45: Link UP Jul 12 00:27:26.249074 systemd-networkd[1234]: cali86f110e5f45: Gained carrier Jul 12 00:27:26.268163 containerd[1535]: time="2025-07-12T00:27:26.267822892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6799769-qhdzl,Uid:3f5a9e1e-e147-4c32-bfe2-69710b77be5f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f\"" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.007 [INFO][5093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--g8ld7-eth0 csi-node-driver- calico-system 37964168-0f35-42e8-bcb1-d8b1fcfa1415 1076 0 2025-07-12 00:27:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-g8ld7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali86f110e5f45 [] [] }} ContainerID="68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" Namespace="calico-system" Pod="csi-node-driver-g8ld7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g8ld7-" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.007 [INFO][5093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" Namespace="calico-system" Pod="csi-node-driver-g8ld7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.045 [INFO][5129] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" HandleID="k8s-pod-network.68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.045 [INFO][5129] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" HandleID="k8s-pod-network.68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058e430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-g8ld7", "timestamp":"2025-07-12 00:27:26.045093304 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.045 [INFO][5129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.113 [INFO][5129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.113 [INFO][5129] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.157 [INFO][5129] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" host="localhost" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.163 [INFO][5129] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.169 [INFO][5129] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.171 [INFO][5129] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.174 [INFO][5129] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.174 [INFO][5129] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" host="localhost" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.176 [INFO][5129] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80 Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.188 [INFO][5129] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" host="localhost" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.238 [INFO][5129] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" host="localhost" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.238 [INFO][5129] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" host="localhost" Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.238 [INFO][5129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:27:26.276755 containerd[1535]: 2025-07-12 00:27:26.238 [INFO][5129] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" HandleID="k8s-pod-network.68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:26.277522 containerd[1535]: 2025-07-12 00:27:26.243 [INFO][5093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" Namespace="calico-system" Pod="csi-node-driver-g8ld7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g8ld7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g8ld7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37964168-0f35-42e8-bcb1-d8b1fcfa1415", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-g8ld7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86f110e5f45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:26.277522 containerd[1535]: 2025-07-12 00:27:26.243 [INFO][5093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" Namespace="calico-system" Pod="csi-node-driver-g8ld7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:26.277522 containerd[1535]: 2025-07-12 00:27:26.243 [INFO][5093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86f110e5f45 ContainerID="68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" Namespace="calico-system" Pod="csi-node-driver-g8ld7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:26.277522 containerd[1535]: 2025-07-12 00:27:26.249 [INFO][5093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" Namespace="calico-system" Pod="csi-node-driver-g8ld7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:26.277522 containerd[1535]: 2025-07-12 00:27:26.252 [INFO][5093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" Namespace="calico-system" Pod="csi-node-driver-g8ld7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--g8ld7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g8ld7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37964168-0f35-42e8-bcb1-d8b1fcfa1415", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80", Pod:"csi-node-driver-g8ld7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86f110e5f45", MAC:"8a:be:2c:7e:22:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:26.277522 containerd[1535]: 2025-07-12 00:27:26.265 [INFO][5093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80" Namespace="calico-system" Pod="csi-node-driver-g8ld7" WorkloadEndpoint="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:26.307226 containerd[1535]: time="2025-07-12T00:27:26.307110821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:26.307226 containerd[1535]: time="2025-07-12T00:27:26.307173266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:26.307226 containerd[1535]: time="2025-07-12T00:27:26.307188307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:26.308003 containerd[1535]: time="2025-07-12T00:27:26.307807478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:26.336722 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:27:26.412481 containerd[1535]: time="2025-07-12T00:27:26.408428614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g8ld7,Uid:37964168-0f35-42e8-bcb1-d8b1fcfa1415,Namespace:calico-system,Attempt:1,} returns sandbox id \"68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80\"" Jul 12 00:27:26.541204 containerd[1535]: time="2025-07-12T00:27:26.541160093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:26.542154 containerd[1535]: time="2025-07-12T00:27:26.542026724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:27:26.544054 containerd[1535]: time="2025-07-12T00:27:26.544009006Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:26.546987 containerd[1535]: time="2025-07-12T00:27:26.546954966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:26.556282 containerd[1535]: time="2025-07-12T00:27:26.556217443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.380054444s" Jul 12 00:27:26.556401 containerd[1535]: time="2025-07-12T00:27:26.556289488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:27:26.558978 containerd[1535]: time="2025-07-12T00:27:26.558257609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:27:26.558978 containerd[1535]: time="2025-07-12T00:27:26.558828376Z" level=info msg="CreateContainer within sandbox \"15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:27:26.572833 containerd[1535]: time="2025-07-12T00:27:26.572775275Z" level=info msg="CreateContainer within sandbox \"15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c51e83f09d0ce3f5d4c402766ed7f8d4c2afc5907ebb00071df99bbb91faf047\"" Jul 12 00:27:26.574505 containerd[1535]: time="2025-07-12T00:27:26.573352882Z" level=info msg="StartContainer for \"c51e83f09d0ce3f5d4c402766ed7f8d4c2afc5907ebb00071df99bbb91faf047\"" Jul 12 00:27:26.643458 containerd[1535]: time="2025-07-12T00:27:26.643426364Z" level=info msg="StartContainer for \"c51e83f09d0ce3f5d4c402766ed7f8d4c2afc5907ebb00071df99bbb91faf047\" returns successfully" Jul 12 00:27:26.667527 kubelet[2607]: I0712 00:27:26.666588 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59f6799769-2znr7" 
podStartSLOduration=23.123204338 podStartE2EDuration="27.666570094s" podCreationTimestamp="2025-07-12 00:26:59 +0000 UTC" firstStartedPulling="2025-07-12 00:27:22.014127031 +0000 UTC m=+40.646679289" lastFinishedPulling="2025-07-12 00:27:26.557492787 +0000 UTC m=+45.190045045" observedRunningTime="2025-07-12 00:27:26.666134698 +0000 UTC m=+45.298686996" watchObservedRunningTime="2025-07-12 00:27:26.666570094 +0000 UTC m=+45.299122392" Jul 12 00:27:27.644666 systemd-networkd[1234]: cali86f110e5f45: Gained IPv6LL Jul 12 00:27:27.644933 systemd-networkd[1234]: caliaa39f251ec4: Gained IPv6LL Jul 12 00:27:27.656623 kubelet[2607]: I0712 00:27:27.656593 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:27:28.567415 containerd[1535]: time="2025-07-12T00:27:28.567368078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:28.569382 containerd[1535]: time="2025-07-12T00:27:28.568031130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:27:28.569382 containerd[1535]: time="2025-07-12T00:27:28.568771348Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:28.571740 containerd[1535]: time="2025-07-12T00:27:28.571707257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:28.572961 containerd[1535]: time="2025-07-12T00:27:28.572805943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.014046933s" Jul 12 00:27:28.572961 containerd[1535]: time="2025-07-12T00:27:28.572839705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:27:28.575475 containerd[1535]: time="2025-07-12T00:27:28.575392025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:27:28.591911 containerd[1535]: time="2025-07-12T00:27:28.591865751Z" level=info msg="CreateContainer within sandbox \"cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:27:28.656550 containerd[1535]: time="2025-07-12T00:27:28.656503318Z" level=info msg="CreateContainer within sandbox \"cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d2ba8620942f95e1ef46c45ee572eda1e914e3ae8c0f4c1b92333420613fde17\"" Jul 12 00:27:28.657560 containerd[1535]: time="2025-07-12T00:27:28.657530078Z" level=info msg="StartContainer for \"d2ba8620942f95e1ef46c45ee572eda1e914e3ae8c0f4c1b92333420613fde17\"" Jul 12 00:27:28.738049 containerd[1535]: time="2025-07-12T00:27:28.738002201Z" level=info msg="StartContainer for 
\"d2ba8620942f95e1ef46c45ee572eda1e914e3ae8c0f4c1b92333420613fde17\" returns successfully" Jul 12 00:27:28.820086 containerd[1535]: time="2025-07-12T00:27:28.819968561Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:28.821047 containerd[1535]: time="2025-07-12T00:27:28.820772384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:27:28.839676 containerd[1535]: time="2025-07-12T00:27:28.839354435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 261.907009ms" Jul 12 00:27:28.839676 containerd[1535]: time="2025-07-12T00:27:28.839672380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:27:28.843540 containerd[1535]: time="2025-07-12T00:27:28.842501080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:27:28.847186 containerd[1535]: time="2025-07-12T00:27:28.847139003Z" level=info msg="CreateContainer within sandbox \"2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:27:28.867262 containerd[1535]: time="2025-07-12T00:27:28.865360025Z" level=info msg="CreateContainer within sandbox \"2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"febbbe0eb44694e928af91833b95589eb710263e45ac7943c953e8ae4389ff3c\"" Jul 12 00:27:28.868179 containerd[1535]: time="2025-07-12T00:27:28.868140322Z" level=info msg="StartContainer for \"febbbe0eb44694e928af91833b95589eb710263e45ac7943c953e8ae4389ff3c\"" Jul 12 00:27:28.994437 containerd[1535]: time="2025-07-12T00:27:28.994386020Z" level=info msg="StartContainer for \"febbbe0eb44694e928af91833b95589eb710263e45ac7943c953e8ae4389ff3c\" returns successfully" Jul 12 00:27:29.580651 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:43874.service - OpenSSH per-connection server daemon (10.0.0.1:43874). Jul 12 00:27:29.650906 sshd[5407]: Accepted publickey for core from 10.0.0.1 port 43874 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:29.653374 sshd[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:29.658943 systemd-logind[1519]: New session 9 of user core. Jul 12 00:27:29.669450 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 12 00:27:29.736692 kubelet[2607]: I0712 00:27:29.736539 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59f6799769-qhdzl" podStartSLOduration=28.167631414 podStartE2EDuration="30.736518451s" podCreationTimestamp="2025-07-12 00:26:59 +0000 UTC" firstStartedPulling="2025-07-12 00:27:26.273033038 +0000 UTC m=+44.905585336" lastFinishedPulling="2025-07-12 00:27:28.841920075 +0000 UTC m=+47.474472373" observedRunningTime="2025-07-12 00:27:29.711943972 +0000 UTC m=+48.344496270" watchObservedRunningTime="2025-07-12 00:27:29.736518451 +0000 UTC m=+48.369070749" Jul 12 00:27:29.738939 kubelet[2607]: I0712 00:27:29.737480 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79f945c777-w6fsx" podStartSLOduration=20.254689969 podStartE2EDuration="26.737469124s" podCreationTimestamp="2025-07-12 00:27:03 +0000 UTC" firstStartedPulling="2025-07-12 00:27:22.091444498 +0000 UTC m=+40.723996756" lastFinishedPulling="2025-07-12 00:27:28.574223613 +0000 UTC m=+47.206775911" observedRunningTime="2025-07-12 00:27:29.733762201 +0000 UTC m=+48.366314499" watchObservedRunningTime="2025-07-12 00:27:29.737469124 +0000 UTC m=+48.370021422" Jul 12 00:27:30.110300 containerd[1535]: time="2025-07-12T00:27:30.110257140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:30.112006 containerd[1535]: time="2025-07-12T00:27:30.111970988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 12 00:27:30.112520 containerd[1535]: time="2025-07-12T00:27:30.112495508Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:30.117212 containerd[1535]: time="2025-07-12T00:27:30.116989765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:30.118525 containerd[1535]: time="2025-07-12T00:27:30.118497958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.275959234s" Jul 12 00:27:30.118628 containerd[1535]: time="2025-07-12T00:27:30.118611326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:27:30.122987 containerd[1535]: time="2025-07-12T00:27:30.122897247Z" level=info msg="CreateContainer within sandbox \"68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:27:30.142233 containerd[1535]: time="2025-07-12T00:27:30.142180812Z" level=info msg="CreateContainer within sandbox \"68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"228a504a8a98a4b6a1db95a9ef770fd65cc9e042ccc14aa149ba31c7de0ba60f\"" Jul 12 00:27:30.144614 containerd[1535]: 
time="2025-07-12T00:27:30.142960271Z" level=info msg="StartContainer for \"228a504a8a98a4b6a1db95a9ef770fd65cc9e042ccc14aa149ba31c7de0ba60f\"" Jul 12 00:27:30.219826 containerd[1535]: time="2025-07-12T00:27:30.219742624Z" level=info msg="StartContainer for \"228a504a8a98a4b6a1db95a9ef770fd65cc9e042ccc14aa149ba31c7de0ba60f\" returns successfully" Jul 12 00:27:30.222126 containerd[1535]: time="2025-07-12T00:27:30.221988953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:27:30.287956 sshd[5407]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:30.298981 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:43874.service: Deactivated successfully. Jul 12 00:27:30.306484 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:27:30.309633 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:27:30.311772 systemd-logind[1519]: Removed session 9. Jul 12 00:27:30.683037 kubelet[2607]: I0712 00:27:30.682996 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:27:31.412967 containerd[1535]: time="2025-07-12T00:27:31.412912686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:31.415020 containerd[1535]: time="2025-07-12T00:27:31.414836187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 00:27:31.415020 containerd[1535]: time="2025-07-12T00:27:31.415009920Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:31.430438 containerd[1535]: time="2025-07-12T00:27:31.430363568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:27:31.431771 containerd[1535]: time="2025-07-12T00:27:31.431162707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.209131551s" Jul 12 00:27:31.431771 containerd[1535]: time="2025-07-12T00:27:31.431208350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:27:31.441103 containerd[1535]: time="2025-07-12T00:27:31.441044193Z" level=info msg="CreateContainer within sandbox \"68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:27:31.496310 containerd[1535]: time="2025-07-12T00:27:31.496196208Z" level=info msg="CreateContainer within sandbox \"68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1b4d5264c195c28215a05804599490d55815fcc8cadc3b81ec1705fc4b15bfa0\"" Jul 12 00:27:31.498495 containerd[1535]: time="2025-07-12T00:27:31.497802006Z" level=info 
msg="StartContainer for \"1b4d5264c195c28215a05804599490d55815fcc8cadc3b81ec1705fc4b15bfa0\"" Jul 12 00:27:31.560417 containerd[1535]: time="2025-07-12T00:27:31.560374965Z" level=info msg="StartContainer for \"1b4d5264c195c28215a05804599490d55815fcc8cadc3b81ec1705fc4b15bfa0\" returns successfully" Jul 12 00:27:31.737345 kubelet[2607]: I0712 00:27:31.736927 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-g8ld7" podStartSLOduration=23.714281706 podStartE2EDuration="28.736911702s" podCreationTimestamp="2025-07-12 00:27:03 +0000 UTC" firstStartedPulling="2025-07-12 00:27:26.409582028 +0000 UTC m=+45.042134326" lastFinishedPulling="2025-07-12 00:27:31.432212064 +0000 UTC m=+50.064764322" observedRunningTime="2025-07-12 00:27:31.736626841 +0000 UTC m=+50.369179099" watchObservedRunningTime="2025-07-12 00:27:31.736911702 +0000 UTC m=+50.369463960" Jul 12 00:27:32.546941 kubelet[2607]: I0712 00:27:32.546891 2607 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:27:32.555283 kubelet[2607]: I0712 00:27:32.554902 2607 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:27:35.305544 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:39914.service - OpenSSH per-connection server daemon (10.0.0.1:39914). Jul 12 00:27:35.361589 sshd[5530]: Accepted publickey for core from 10.0.0.1 port 39914 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:35.364038 sshd[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:35.368057 systemd-logind[1519]: New session 10 of user core. Jul 12 00:27:35.375560 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:27:35.593392 sshd[5530]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:35.601572 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:39922.service - OpenSSH per-connection server daemon (10.0.0.1:39922). Jul 12 00:27:35.606602 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:39914.service: Deactivated successfully. Jul 12 00:27:35.607324 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:27:35.609552 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:27:35.612334 systemd-logind[1519]: Removed session 10. Jul 12 00:27:35.636722 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 39922 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:35.643873 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:35.667266 systemd-logind[1519]: New session 11 of user core. Jul 12 00:27:35.675618 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:27:35.920364 sshd[5544]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:35.927497 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:39934.service - OpenSSH per-connection server daemon (10.0.0.1:39934). Jul 12 00:27:35.927879 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:39922.service: Deactivated successfully. Jul 12 00:27:35.930895 systemd-logind[1519]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:27:35.931049 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:27:35.932758 systemd-logind[1519]: Removed session 11. 
Jul 12 00:27:35.962220 sshd[5559]: Accepted publickey for core from 10.0.0.1 port 39934 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:35.963621 sshd[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:35.978170 systemd-logind[1519]: New session 12 of user core. Jul 12 00:27:35.988952 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 00:27:36.113282 sshd[5559]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:36.117637 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:39934.service: Deactivated successfully. Jul 12 00:27:36.117692 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:27:36.119895 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:27:36.120541 systemd-logind[1519]: Removed session 12. Jul 12 00:27:41.129622 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:39946.service - OpenSSH per-connection server daemon (10.0.0.1:39946). Jul 12 00:27:41.163195 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 39946 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:41.164777 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:41.169336 systemd-logind[1519]: New session 13 of user core. Jul 12 00:27:41.173491 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 00:27:41.335427 sshd[5589]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:41.345489 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:39956.service - OpenSSH per-connection server daemon (10.0.0.1:39956). Jul 12 00:27:41.345907 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:39946.service: Deactivated successfully. Jul 12 00:27:41.349195 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:27:41.349976 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:27:41.351369 systemd-logind[1519]: Removed session 13. Jul 12 00:27:41.387572 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 39956 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:41.388407 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:41.409907 systemd-logind[1519]: New session 14 of user core. Jul 12 00:27:41.416544 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 00:27:41.447852 containerd[1535]: time="2025-07-12T00:27:41.447797095Z" level=info msg="StopPodSandbox for \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\"" Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.488 [WARNING][5619] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b4305437-9cc5-4131-a2db-7fb983a5778e", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56", Pod:"coredns-7c65d6cfc9-gjr7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27dc5cc0caa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.488 [INFO][5619] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.488 [INFO][5619] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" iface="eth0" netns="" Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.488 [INFO][5619] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.488 [INFO][5619] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.544 [INFO][5634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" HandleID="k8s-pod-network.808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.547 [INFO][5634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.549 [INFO][5634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.561 [WARNING][5634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" HandleID="k8s-pod-network.808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.561 [INFO][5634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" HandleID="k8s-pod-network.808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.565 [INFO][5634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:41.576351 containerd[1535]: 2025-07-12 00:27:41.572 [INFO][5619] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:41.576841 containerd[1535]: time="2025-07-12T00:27:41.576392038Z" level=info msg="TearDown network for sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\" successfully" Jul 12 00:27:41.576841 containerd[1535]: time="2025-07-12T00:27:41.576418360Z" level=info msg="StopPodSandbox for \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\" returns successfully" Jul 12 00:27:41.579686 containerd[1535]: time="2025-07-12T00:27:41.577921175Z" level=info msg="RemovePodSandbox for \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\"" Jul 12 00:27:41.592737 containerd[1535]: time="2025-07-12T00:27:41.592674029Z" level=info msg="Forcibly stopping sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\"" Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.638 [WARNING][5653] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b4305437-9cc5-4131-a2db-7fb983a5778e", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86193a8f94800b405710c73558735b8225d9b5c30c3b9196f4b65e91d9ef1d56", Pod:"coredns-7c65d6cfc9-gjr7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27dc5cc0caa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.639 [INFO][5653] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.639 [INFO][5653] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" iface="eth0" netns="" Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.639 [INFO][5653] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.639 [INFO][5653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.664 [INFO][5661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" HandleID="k8s-pod-network.808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.664 [INFO][5661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.664 [INFO][5661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.673 [WARNING][5661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" HandleID="k8s-pod-network.808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.673 [INFO][5661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" HandleID="k8s-pod-network.808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Workload="localhost-k8s-coredns--7c65d6cfc9--gjr7m-eth0" Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.674 [INFO][5661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:41.680288 containerd[1535]: 2025-07-12 00:27:41.676 [INFO][5653] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c" Jul 12 00:27:41.680288 containerd[1535]: time="2025-07-12T00:27:41.678391778Z" level=info msg="TearDown network for sandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\" successfully" Jul 12 00:27:41.689498 sshd[5602]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:41.693819 containerd[1535]: time="2025-07-12T00:27:41.693781392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:27:41.693992 containerd[1535]: time="2025-07-12T00:27:41.693974924Z" level=info msg="RemovePodSandbox \"808d47c5a0211fa1e51f9f3f0702ddfc8c6a4be6398d17638c101516482a582c\" returns successfully" Jul 12 00:27:41.697480 containerd[1535]: time="2025-07-12T00:27:41.697452345Z" level=info msg="StopPodSandbox for \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\"" Jul 12 00:27:41.699703 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:39962.service - OpenSSH per-connection server daemon (10.0.0.1:39962). Jul 12 00:27:41.700102 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:39956.service: Deactivated successfully. Jul 12 00:27:41.705175 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:27:41.706352 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:27:41.708105 systemd-logind[1519]: Removed session 14. Jul 12 00:27:41.746013 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 39962 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:41.747159 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:41.753747 systemd-logind[1519]: New session 15 of user core. Jul 12 00:27:41.758536 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.746 [WARNING][5685] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ce3af4a-44f7-483b-9fa4-a5cd1f72b652", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5", Pod:"coredns-7c65d6cfc9-prwgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03d92d7cc10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.746 [INFO][5685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.746 [INFO][5685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" iface="eth0" netns="" Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.746 [INFO][5685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.746 [INFO][5685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.782 [INFO][5695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" HandleID="k8s-pod-network.64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.782 [INFO][5695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.782 [INFO][5695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.792 [WARNING][5695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" HandleID="k8s-pod-network.64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.792 [INFO][5695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" HandleID="k8s-pod-network.64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.794 [INFO][5695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:41.800981 containerd[1535]: 2025-07-12 00:27:41.796 [INFO][5685] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:41.800981 containerd[1535]: time="2025-07-12T00:27:41.800869374Z" level=info msg="TearDown network for sandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\" successfully" Jul 12 00:27:41.800981 containerd[1535]: time="2025-07-12T00:27:41.800892735Z" level=info msg="StopPodSandbox for \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\" returns successfully" Jul 12 00:27:41.802567 containerd[1535]: time="2025-07-12T00:27:41.802007606Z" level=info msg="RemovePodSandbox for \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\"" Jul 12 00:27:41.802567 containerd[1535]: time="2025-07-12T00:27:41.802059209Z" level=info msg="Forcibly stopping sandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\"" Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.873 [WARNING][5716] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ce3af4a-44f7-483b-9fa4-a5cd1f72b652", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a92e38dc59ab9900966d5b6a661727f47326f61a3139888cc0b95061dceefea5", Pod:"coredns-7c65d6cfc9-prwgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03d92d7cc10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.875 [INFO][5716] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.875 [INFO][5716] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" iface="eth0" netns="" Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.875 [INFO][5716] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.875 [INFO][5716] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.917 [INFO][5731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" HandleID="k8s-pod-network.64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.917 [INFO][5731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.917 [INFO][5731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.927 [WARNING][5731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" HandleID="k8s-pod-network.64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.927 [INFO][5731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" HandleID="k8s-pod-network.64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Workload="localhost-k8s-coredns--7c65d6cfc9--prwgn-eth0" Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.928 [INFO][5731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:41.934302 containerd[1535]: 2025-07-12 00:27:41.931 [INFO][5716] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108" Jul 12 00:27:41.935619 containerd[1535]: time="2025-07-12T00:27:41.934455993Z" level=info msg="TearDown network for sandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\" successfully" Jul 12 00:27:41.948059 containerd[1535]: time="2025-07-12T00:27:41.947896684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:27:41.948059 containerd[1535]: time="2025-07-12T00:27:41.947969689Z" level=info msg="RemovePodSandbox \"64032482afc19d33328f0cdb48e1849899c68fe0f0de79b476dca7ad814ff108\" returns successfully" Jul 12 00:27:41.948845 containerd[1535]: time="2025-07-12T00:27:41.948689455Z" level=info msg="StopPodSandbox for \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\"" Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:41.985 [WARNING][5748] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g8ld7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37964168-0f35-42e8-bcb1-d8b1fcfa1415", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80", Pod:"csi-node-driver-g8ld7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86f110e5f45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:41.985 [INFO][5748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:41.985 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" iface="eth0" netns="" Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:41.985 [INFO][5748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:41.985 [INFO][5748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:42.007 [INFO][5757] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" HandleID="k8s-pod-network.cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:42.007 [INFO][5757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:42.007 [INFO][5757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:42.015 [WARNING][5757] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" HandleID="k8s-pod-network.cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:42.016 [INFO][5757] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" HandleID="k8s-pod-network.cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:42.017 [INFO][5757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:42.024183 containerd[1535]: 2025-07-12 00:27:42.021 [INFO][5748] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:42.024771 containerd[1535]: time="2025-07-12T00:27:42.024220582Z" level=info msg="TearDown network for sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\" successfully" Jul 12 00:27:42.024771 containerd[1535]: time="2025-07-12T00:27:42.024257664Z" level=info msg="StopPodSandbox for \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\" returns successfully" Jul 12 00:27:42.024771 containerd[1535]: time="2025-07-12T00:27:42.024744094Z" level=info msg="RemovePodSandbox for \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\"" Jul 12 00:27:42.024846 containerd[1535]: time="2025-07-12T00:27:42.024773136Z" level=info msg="Forcibly stopping sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\"" Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.066 [WARNING][5774] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g8ld7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37964168-0f35-42e8-bcb1-d8b1fcfa1415", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68d822a991dc6249f0015d8036b1569ceb3cd1b510b2f3de91b57a711d37db80", Pod:"csi-node-driver-g8ld7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86f110e5f45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.066 [INFO][5774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.066 [INFO][5774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" iface="eth0" netns="" Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.066 [INFO][5774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.066 [INFO][5774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.097 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" HandleID="k8s-pod-network.cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.098 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.098 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.109 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" HandleID="k8s-pod-network.cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.109 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" HandleID="k8s-pod-network.cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Workload="localhost-k8s-csi--node--driver--g8ld7-eth0" Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.113 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:42.118811 containerd[1535]: 2025-07-12 00:27:42.117 [INFO][5774] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c" Jul 12 00:27:42.119365 containerd[1535]: time="2025-07-12T00:27:42.118816266Z" level=info msg="TearDown network for sandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\" successfully" Jul 12 00:27:42.125015 containerd[1535]: time="2025-07-12T00:27:42.124978452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:27:42.125083 containerd[1535]: time="2025-07-12T00:27:42.125050256Z" level=info msg="RemovePodSandbox \"cdf628caf443a501e9ee3ad3d29ceae8c9c6074624b3c3e7a4641e8f5006195c\" returns successfully" Jul 12 00:27:42.125518 containerd[1535]: time="2025-07-12T00:27:42.125495084Z" level=info msg="StopPodSandbox for \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\"" Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.167 [WARNING][5801] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0", GenerateName:"calico-apiserver-59f6799769-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f5a9e1e-e147-4c32-bfe2-69710b77be5f", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6799769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f", Pod:"calico-apiserver-59f6799769-qhdzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa39f251ec4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.167 [INFO][5801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.167 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" iface="eth0" netns="" Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.167 [INFO][5801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.167 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.199 [INFO][5810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" HandleID="k8s-pod-network.7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.201 [INFO][5810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.201 [INFO][5810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.212 [WARNING][5810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" HandleID="k8s-pod-network.7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.212 [INFO][5810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" HandleID="k8s-pod-network.7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.214 [INFO][5810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:42.219377 containerd[1535]: 2025-07-12 00:27:42.217 [INFO][5801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:42.219377 containerd[1535]: time="2025-07-12T00:27:42.219350042Z" level=info msg="TearDown network for sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\" successfully" Jul 12 00:27:42.219377 containerd[1535]: time="2025-07-12T00:27:42.219379643Z" level=info msg="StopPodSandbox for \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\" returns successfully" Jul 12 00:27:42.220680 containerd[1535]: time="2025-07-12T00:27:42.219855233Z" level=info msg="RemovePodSandbox for \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\"" Jul 12 00:27:42.220680 containerd[1535]: time="2025-07-12T00:27:42.219881955Z" level=info msg="Forcibly stopping sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\"" Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.319 [WARNING][5832] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0", GenerateName:"calico-apiserver-59f6799769-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f5a9e1e-e147-4c32-bfe2-69710b77be5f", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6799769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2042e61ec0dba31b20b2756e24fadd38ec831c94abb0aa277dc27025a0abc69f", Pod:"calico-apiserver-59f6799769-qhdzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa39f251ec4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.328 [INFO][5832] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.328 [INFO][5832] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" iface="eth0" netns="" Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.328 [INFO][5832] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.328 [INFO][5832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.362 [INFO][5840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" HandleID="k8s-pod-network.7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.362 [INFO][5840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.362 [INFO][5840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.375 [WARNING][5840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" HandleID="k8s-pod-network.7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.375 [INFO][5840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" HandleID="k8s-pod-network.7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Workload="localhost-k8s-calico--apiserver--59f6799769--qhdzl-eth0" Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.376 [INFO][5840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:42.381018 containerd[1535]: 2025-07-12 00:27:42.379 [INFO][5832] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62" Jul 12 00:27:42.381467 containerd[1535]: time="2025-07-12T00:27:42.381058529Z" level=info msg="TearDown network for sandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\" successfully" Jul 12 00:27:42.386788 containerd[1535]: time="2025-07-12T00:27:42.386584275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:27:42.386788 containerd[1535]: time="2025-07-12T00:27:42.386665920Z" level=info msg="RemovePodSandbox \"7158bf3cd6ee74a62ebd382d3df1fe12aea5e257f6623c0fa6880b7bf7801f62\" returns successfully" Jul 12 00:27:42.387165 containerd[1535]: time="2025-07-12T00:27:42.387137749Z" level=info msg="StopPodSandbox for \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\"" Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.439 [WARNING][5857] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" WorkloadEndpoint="localhost-k8s-whisker--b7c4d74cb--6q9dz-eth0" Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.439 [INFO][5857] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.439 [INFO][5857] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" iface="eth0" netns="" Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.439 [INFO][5857] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.439 [INFO][5857] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.462 [INFO][5865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" HandleID="k8s-pod-network.693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Workload="localhost-k8s-whisker--b7c4d74cb--6q9dz-eth0" Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.463 [INFO][5865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.463 [INFO][5865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.479 [WARNING][5865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" HandleID="k8s-pod-network.693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Workload="localhost-k8s-whisker--b7c4d74cb--6q9dz-eth0" Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.479 [INFO][5865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" HandleID="k8s-pod-network.693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Workload="localhost-k8s-whisker--b7c4d74cb--6q9dz-eth0" Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.482 [INFO][5865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:42.496670 containerd[1535]: 2025-07-12 00:27:42.491 [INFO][5857] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:42.496670 containerd[1535]: time="2025-07-12T00:27:42.496408792Z" level=info msg="TearDown network for sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\" successfully" Jul 12 00:27:42.496670 containerd[1535]: time="2025-07-12T00:27:42.496433634Z" level=info msg="StopPodSandbox for \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\" returns successfully" Jul 12 00:27:42.497331 containerd[1535]: time="2025-07-12T00:27:42.496846500Z" level=info msg="RemovePodSandbox for \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\"" Jul 12 00:27:42.497331 containerd[1535]: time="2025-07-12T00:27:42.496873742Z" level=info msg="Forcibly stopping sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\"" Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.583 [WARNING][5882] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" WorkloadEndpoint="localhost-k8s-whisker--b7c4d74cb--6q9dz-eth0" Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.583 [INFO][5882] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.583 [INFO][5882] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" iface="eth0" netns="" Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.583 [INFO][5882] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.583 [INFO][5882] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.606 [INFO][5891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" HandleID="k8s-pod-network.693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Workload="localhost-k8s-whisker--b7c4d74cb--6q9dz-eth0" Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.606 [INFO][5891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.607 [INFO][5891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.623 [WARNING][5891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" HandleID="k8s-pod-network.693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Workload="localhost-k8s-whisker--b7c4d74cb--6q9dz-eth0" Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.623 [INFO][5891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" HandleID="k8s-pod-network.693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Workload="localhost-k8s-whisker--b7c4d74cb--6q9dz-eth0" Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.625 [INFO][5891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:42.630759 containerd[1535]: 2025-07-12 00:27:42.629 [INFO][5882] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466" Jul 12 00:27:42.631259 containerd[1535]: time="2025-07-12T00:27:42.630789888Z" level=info msg="TearDown network for sandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\" successfully" Jul 12 00:27:42.640844 containerd[1535]: time="2025-07-12T00:27:42.639323182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:27:42.640844 containerd[1535]: time="2025-07-12T00:27:42.639398347Z" level=info msg="RemovePodSandbox \"693f7089bbeb103b60071b01fc343756ee76d482a41263979ca7eeca1f878466\" returns successfully" Jul 12 00:27:42.640844 containerd[1535]: time="2025-07-12T00:27:42.639846535Z" level=info msg="StopPodSandbox for \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\"" Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.673 [WARNING][5909] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d9b719b0-599b-4efc-90b7-09fca6dfcce5", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb", Pod:"goldmane-58fd7646b9-vmwvl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9bbb008262f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.673 [INFO][5909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.673 [INFO][5909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" iface="eth0" netns="" Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.673 [INFO][5909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.673 [INFO][5909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.691 [INFO][5918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" HandleID="k8s-pod-network.d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.691 [INFO][5918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.691 [INFO][5918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.699 [WARNING][5918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" HandleID="k8s-pod-network.d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.699 [INFO][5918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" HandleID="k8s-pod-network.d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.700 [INFO][5918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:42.704296 containerd[1535]: 2025-07-12 00:27:42.702 [INFO][5909] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:42.704730 containerd[1535]: time="2025-07-12T00:27:42.704330934Z" level=info msg="TearDown network for sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\" successfully" Jul 12 00:27:42.704730 containerd[1535]: time="2025-07-12T00:27:42.704358375Z" level=info msg="StopPodSandbox for \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\" returns successfully" Jul 12 00:27:42.709253 containerd[1535]: time="2025-07-12T00:27:42.704958013Z" level=info msg="RemovePodSandbox for \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\"" Jul 12 00:27:42.709253 containerd[1535]: time="2025-07-12T00:27:42.704997215Z" level=info msg="Forcibly stopping sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\"" Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.745 [WARNING][5935] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d9b719b0-599b-4efc-90b7-09fca6dfcce5", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35ea808c9c46129d81026842b2526ed928d762138565824f3aaab7a4c0d1dabb", Pod:"goldmane-58fd7646b9-vmwvl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9bbb008262f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.745 [INFO][5935] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.745 [INFO][5935] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" iface="eth0" netns="" Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.745 [INFO][5935] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.745 [INFO][5935] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.770 [INFO][5943] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" HandleID="k8s-pod-network.d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.770 [INFO][5943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.770 [INFO][5943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.790 [WARNING][5943] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" HandleID="k8s-pod-network.d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.790 [INFO][5943] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" HandleID="k8s-pod-network.d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Workload="localhost-k8s-goldmane--58fd7646b9--vmwvl-eth0" Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.791 [INFO][5943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:42.800992 containerd[1535]: 2025-07-12 00:27:42.795 [INFO][5935] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9" Jul 12 00:27:42.800992 containerd[1535]: time="2025-07-12T00:27:42.800285023Z" level=info msg="TearDown network for sandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\" successfully" Jul 12 00:27:42.805387 containerd[1535]: time="2025-07-12T00:27:42.805354100Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:27:42.805958 containerd[1535]: time="2025-07-12T00:27:42.805830490Z" level=info msg="RemovePodSandbox \"d5bcfd73c67515831a10ed9b741cacf400465a92a55b116c34a8eebf433b94a9\" returns successfully" Jul 12 00:27:42.807630 containerd[1535]: time="2025-07-12T00:27:42.807603601Z" level=info msg="StopPodSandbox for \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\"" Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.846 [WARNING][5961] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0", GenerateName:"calico-apiserver-59f6799769-", Namespace:"calico-apiserver", SelfLink:"", UID:"f68a8a6e-2029-41f7-af68-f0e9a3b5f706", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6799769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101", Pod:"calico-apiserver-59f6799769-2znr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf32c8d9cf3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.846 [INFO][5961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.846 [INFO][5961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" iface="eth0" netns="" Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.846 [INFO][5961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.846 [INFO][5961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.870 [INFO][5970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" HandleID="k8s-pod-network.48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.870 [INFO][5970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.871 [INFO][5970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.882 [WARNING][5970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" HandleID="k8s-pod-network.48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.882 [INFO][5970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" HandleID="k8s-pod-network.48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.885 [INFO][5970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:42.893923 containerd[1535]: 2025-07-12 00:27:42.891 [INFO][5961] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:42.896293 containerd[1535]: time="2025-07-12T00:27:42.893965529Z" level=info msg="TearDown network for sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\" successfully" Jul 12 00:27:42.896293 containerd[1535]: time="2025-07-12T00:27:42.894002772Z" level=info msg="StopPodSandbox for \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\" returns successfully" Jul 12 00:27:42.896293 containerd[1535]: time="2025-07-12T00:27:42.894424918Z" level=info msg="RemovePodSandbox for \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\"" Jul 12 00:27:42.896293 containerd[1535]: time="2025-07-12T00:27:42.894451520Z" level=info msg="Forcibly stopping sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\"" Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:42.935 [WARNING][5988] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0", GenerateName:"calico-apiserver-59f6799769-", Namespace:"calico-apiserver", SelfLink:"", UID:"f68a8a6e-2029-41f7-af68-f0e9a3b5f706", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6799769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15f194b327a6c2a13b9ded4a7cf76c3b58014468c7c410fdd39fbe783e36d101", Pod:"calico-apiserver-59f6799769-2znr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf32c8d9cf3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:42.938 [INFO][5988] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:42.938 [INFO][5988] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" iface="eth0" netns="" Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:42.938 [INFO][5988] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:42.938 [INFO][5988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:43.009 [INFO][5997] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" HandleID="k8s-pod-network.48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:43.009 [INFO][5997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:43.009 [INFO][5997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:43.021 [WARNING][5997] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" HandleID="k8s-pod-network.48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:43.021 [INFO][5997] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" HandleID="k8s-pod-network.48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Workload="localhost-k8s-calico--apiserver--59f6799769--2znr7-eth0" Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:43.022 [INFO][5997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:43.028288 containerd[1535]: 2025-07-12 00:27:43.025 [INFO][5988] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d" Jul 12 00:27:43.028288 containerd[1535]: time="2025-07-12T00:27:43.027126532Z" level=info msg="TearDown network for sandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\" successfully" Jul 12 00:27:43.033962 containerd[1535]: time="2025-07-12T00:27:43.033892111Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:27:43.034111 containerd[1535]: time="2025-07-12T00:27:43.033974916Z" level=info msg="RemovePodSandbox \"48d17ad5036c88539a958cf49eb35ca7d3170b90c3f7263e2a32085483d9bc7d\" returns successfully" Jul 12 00:27:43.034510 containerd[1535]: time="2025-07-12T00:27:43.034477827Z" level=info msg="StopPodSandbox for \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\"" Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.075 [WARNING][6014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0", GenerateName:"calico-kube-controllers-79f945c777-", Namespace:"calico-system", SelfLink:"", UID:"e5ce9aa9-2aec-4285-9ae0-962553767dc1", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f945c777", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5", Pod:"calico-kube-controllers-79f945c777-w6fsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif32bfabfaeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.076 [INFO][6014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.076 [INFO][6014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" iface="eth0" netns="" Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.076 [INFO][6014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.076 [INFO][6014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.113 [INFO][6022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" HandleID="k8s-pod-network.128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.113 [INFO][6022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.113 [INFO][6022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.129 [WARNING][6022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" HandleID="k8s-pod-network.128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.129 [INFO][6022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" HandleID="k8s-pod-network.128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.130 [INFO][6022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:43.141227 containerd[1535]: 2025-07-12 00:27:43.133 [INFO][6014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:43.143709 containerd[1535]: time="2025-07-12T00:27:43.141287086Z" level=info msg="TearDown network for sandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\" successfully" Jul 12 00:27:43.143709 containerd[1535]: time="2025-07-12T00:27:43.141312087Z" level=info msg="StopPodSandbox for \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\" returns successfully" Jul 12 00:27:43.143709 containerd[1535]: time="2025-07-12T00:27:43.141726233Z" level=info msg="RemovePodSandbox for \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\"" Jul 12 00:27:43.143709 containerd[1535]: time="2025-07-12T00:27:43.141757315Z" level=info msg="Forcibly stopping sandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\"" Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.196 [WARNING][6040] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0", GenerateName:"calico-kube-controllers-79f945c777-", Namespace:"calico-system", SelfLink:"", UID:"e5ce9aa9-2aec-4285-9ae0-962553767dc1", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f945c777", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc9e41f8cfb6d2e47e6dae22856010ee61655fffd8e5a0579b0e686294f664d5", Pod:"calico-kube-controllers-79f945c777-w6fsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif32bfabfaeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.196 [INFO][6040] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.196 [INFO][6040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" iface="eth0" netns="" Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.196 [INFO][6040] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.196 [INFO][6040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.220 [INFO][6048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" HandleID="k8s-pod-network.128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.220 [INFO][6048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.220 [INFO][6048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.230 [WARNING][6048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" HandleID="k8s-pod-network.128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.230 [INFO][6048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" HandleID="k8s-pod-network.128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Workload="localhost-k8s-calico--kube--controllers--79f945c777--w6fsx-eth0" Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.231 [INFO][6048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:43.248307 containerd[1535]: 2025-07-12 00:27:43.239 [INFO][6040] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059" Jul 12 00:27:43.248307 containerd[1535]: time="2025-07-12T00:27:43.241139473Z" level=info msg="TearDown network for sandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\" successfully" Jul 12 00:27:43.248307 containerd[1535]: time="2025-07-12T00:27:43.244129179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:27:43.248307 containerd[1535]: time="2025-07-12T00:27:43.244214104Z" level=info msg="RemovePodSandbox \"128036cbd02eac908791c7f8e70e2ba85e2d72b83656deb33204f691c1dad059\" returns successfully" Jul 12 00:27:43.507810 sshd[5669]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:43.517606 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:35372.service - OpenSSH per-connection server daemon (10.0.0.1:35372). Jul 12 00:27:43.532639 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:27:43.533993 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:39962.service: Deactivated successfully. Jul 12 00:27:43.538990 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:27:43.539863 systemd-logind[1519]: Removed session 15. Jul 12 00:27:43.619504 sshd[6060]: Accepted publickey for core from 10.0.0.1 port 35372 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:43.620925 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:43.625319 systemd-logind[1519]: New session 16 of user core. Jul 12 00:27:43.632559 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 00:27:44.178596 sshd[6060]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:44.188746 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:35382.service - OpenSSH per-connection server daemon (10.0.0.1:35382). Jul 12 00:27:44.189347 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:35372.service: Deactivated successfully. Jul 12 00:27:44.193452 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:27:44.195123 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:27:44.198779 systemd-logind[1519]: Removed session 16. 
Jul 12 00:27:44.228411 sshd[6073]: Accepted publickey for core from 10.0.0.1 port 35382 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:44.229795 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:44.234110 systemd-logind[1519]: New session 17 of user core. Jul 12 00:27:44.242624 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 12 00:27:44.417741 sshd[6073]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:44.423591 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:35382.service: Deactivated successfully. Jul 12 00:27:44.425688 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:27:44.426223 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:27:44.427140 systemd-logind[1519]: Removed session 17. Jul 12 00:27:47.562045 systemd[1]: run-containerd-runc-k8s.io-d2ba8620942f95e1ef46c45ee572eda1e914e3ae8c0f4c1b92333420613fde17-runc.9QOUN9.mount: Deactivated successfully. Jul 12 00:27:49.430502 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:35384.service - OpenSSH per-connection server daemon (10.0.0.1:35384). Jul 12 00:27:49.466591 sshd[6117]: Accepted publickey for core from 10.0.0.1 port 35384 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:49.468036 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:49.473182 systemd-logind[1519]: New session 18 of user core. Jul 12 00:27:49.480562 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:27:49.631536 sshd[6117]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:49.638778 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:35384.service: Deactivated successfully. Jul 12 00:27:49.646142 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:27:49.646454 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:27:49.649617 systemd-logind[1519]: Removed session 18. Jul 12 00:27:54.644713 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:50408.service - OpenSSH per-connection server daemon (10.0.0.1:50408). Jul 12 00:27:54.677311 sshd[6180]: Accepted publickey for core from 10.0.0.1 port 50408 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:54.678756 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:54.683581 systemd-logind[1519]: New session 19 of user core. Jul 12 00:27:54.692526 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 12 00:27:54.842730 sshd[6180]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:54.847662 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:50408.service: Deactivated successfully. Jul 12 00:27:54.851215 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:27:54.851775 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:27:54.855655 systemd-logind[1519]: Removed session 19. Jul 12 00:27:59.851650 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:50424.service - OpenSSH per-connection server daemon (10.0.0.1:50424). Jul 12 00:27:59.882623 sshd[6224]: Accepted publickey for core from 10.0.0.1 port 50424 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:27:59.883902 sshd[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:27:59.887982 systemd-logind[1519]: New session 20 of user core. 
Jul 12 00:27:59.895512 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 00:27:59.906181 kubelet[2607]: I0712 00:27:59.905475 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:28:00.052496 sshd[6224]: pam_unix(sshd:session): session closed for user core Jul 12 00:28:00.056368 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:50424.service: Deactivated successfully. Jul 12 00:28:00.058387 systemd-logind[1519]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:28:00.058481 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:28:00.059526 systemd-logind[1519]: Removed session 20. Jul 12 00:28:00.453549 kubelet[2607]: E0712 00:28:00.453516 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"