Sep 11 23:44:40.794705 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 11 23:44:40.794726 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Sep 11 22:16:14 -00 2025
Sep 11 23:44:40.794735 kernel: KASLR enabled
Sep 11 23:44:40.794741 kernel: efi: EFI v2.7 by EDK II
Sep 11 23:44:40.794746 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 11 23:44:40.794752 kernel: random: crng init done
Sep 11 23:44:40.794758 kernel: secureboot: Secure boot disabled
Sep 11 23:44:40.794764 kernel: ACPI: Early table checksum verification disabled
Sep 11 23:44:40.794770 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 11 23:44:40.794777 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 11 23:44:40.794783 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:44:40.794789 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:44:40.794794 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:44:40.794800 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:44:40.794807 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:44:40.794814 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:44:40.794821 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:44:40.794827 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:44:40.794833 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:44:40.794840 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 11 23:44:40.794846 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 11 23:44:40.794852 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:44:40.794858 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 11 23:44:40.794864 kernel: Zone ranges:
Sep 11 23:44:40.794870 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:44:40.794890 kernel: DMA32 empty
Sep 11 23:44:40.794898 kernel: Normal empty
Sep 11 23:44:40.794904 kernel: Device empty
Sep 11 23:44:40.794910 kernel: Movable zone start for each node
Sep 11 23:44:40.794916 kernel: Early memory node ranges
Sep 11 23:44:40.794922 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 11 23:44:40.794928 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 11 23:44:40.794934 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 11 23:44:40.794940 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 11 23:44:40.794946 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 11 23:44:40.794952 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 11 23:44:40.794958 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 11 23:44:40.794966 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 11 23:44:40.794972 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 11 23:44:40.794978 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 11 23:44:40.794987 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 11 23:44:40.794993 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 11 23:44:40.795000 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 11 23:44:40.795008 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:44:40.795014 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 11 23:44:40.795021 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 11 23:44:40.795027 kernel: psci: probing for conduit method from ACPI.
Sep 11 23:44:40.795034 kernel: psci: PSCIv1.1 detected in firmware.
Sep 11 23:44:40.795040 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 11 23:44:40.795046 kernel: psci: Trusted OS migration not required
Sep 11 23:44:40.795053 kernel: psci: SMC Calling Convention v1.1
Sep 11 23:44:40.795059 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 11 23:44:40.795065 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 11 23:44:40.795073 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 11 23:44:40.795079 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 11 23:44:40.795086 kernel: Detected PIPT I-cache on CPU0
Sep 11 23:44:40.795092 kernel: CPU features: detected: GIC system register CPU interface
Sep 11 23:44:40.795098 kernel: CPU features: detected: Spectre-v4
Sep 11 23:44:40.795105 kernel: CPU features: detected: Spectre-BHB
Sep 11 23:44:40.795111 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 11 23:44:40.795117 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 11 23:44:40.795124 kernel: CPU features: detected: ARM erratum 1418040
Sep 11 23:44:40.795130 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 11 23:44:40.795136 kernel: alternatives: applying boot alternatives
Sep 11 23:44:40.795144 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=34cdae46b43e6281eb14909b07c5254135a938c8cecf4370cc2216c267809c7a
Sep 11 23:44:40.795151 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 23:44:40.795158 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 23:44:40.795164 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 23:44:40.795171 kernel: Fallback order for Node 0: 0
Sep 11 23:44:40.795177 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 11 23:44:40.795183 kernel: Policy zone: DMA
Sep 11 23:44:40.795190 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 23:44:40.795196 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 11 23:44:40.795202 kernel: software IO TLB: area num 4.
Sep 11 23:44:40.795209 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 11 23:44:40.795215 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 11 23:44:40.795223 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 23:44:40.795236 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 23:44:40.795243 kernel: rcu: RCU event tracing is enabled.
Sep 11 23:44:40.795250 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 23:44:40.795256 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 23:44:40.795263 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 23:44:40.795269 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 23:44:40.795276 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 23:44:40.795282 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 23:44:40.795289 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 23:44:40.795295 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 11 23:44:40.795304 kernel: GICv3: 256 SPIs implemented
Sep 11 23:44:40.795310 kernel: GICv3: 0 Extended SPIs implemented
Sep 11 23:44:40.795317 kernel: Root IRQ handler: gic_handle_irq
Sep 11 23:44:40.795323 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 11 23:44:40.795329 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 11 23:44:40.795336 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 11 23:44:40.795342 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 11 23:44:40.795348 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 11 23:44:40.795355 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 11 23:44:40.795361 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 11 23:44:40.795368 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 11 23:44:40.795374 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 23:44:40.795382 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:44:40.795388 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 11 23:44:40.795395 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 11 23:44:40.795401 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 11 23:44:40.795407 kernel: arm-pv: using stolen time PV
Sep 11 23:44:40.795414 kernel: Console: colour dummy device 80x25
Sep 11 23:44:40.795421 kernel: ACPI: Core revision 20240827
Sep 11 23:44:40.795427 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 11 23:44:40.795434 kernel: pid_max: default: 32768 minimum: 301
Sep 11 23:44:40.795440 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 23:44:40.795448 kernel: landlock: Up and running.
Sep 11 23:44:40.795455 kernel: SELinux: Initializing.
Sep 11 23:44:40.795461 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 23:44:40.795468 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 23:44:40.795474 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 23:44:40.795481 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 23:44:40.795487 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 23:44:40.795494 kernel: Remapping and enabling EFI services.
Sep 11 23:44:40.795501 kernel: smp: Bringing up secondary CPUs ...
Sep 11 23:44:40.795513 kernel: Detected PIPT I-cache on CPU1
Sep 11 23:44:40.795519 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 11 23:44:40.795526 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 11 23:44:40.795534 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:44:40.795541 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 11 23:44:40.795548 kernel: Detected PIPT I-cache on CPU2
Sep 11 23:44:40.795555 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 11 23:44:40.795562 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 11 23:44:40.795571 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:44:40.795577 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 11 23:44:40.795584 kernel: Detected PIPT I-cache on CPU3
Sep 11 23:44:40.795591 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 11 23:44:40.795598 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 11 23:44:40.795605 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:44:40.795612 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 11 23:44:40.795619 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 23:44:40.795626 kernel: SMP: Total of 4 processors activated.
Sep 11 23:44:40.795635 kernel: CPU: All CPU(s) started at EL1
Sep 11 23:44:40.795642 kernel: CPU features: detected: 32-bit EL0 Support
Sep 11 23:44:40.795649 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 11 23:44:40.795656 kernel: CPU features: detected: Common not Private translations
Sep 11 23:44:40.795662 kernel: CPU features: detected: CRC32 instructions
Sep 11 23:44:40.795669 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 11 23:44:40.795676 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 11 23:44:40.795683 kernel: CPU features: detected: LSE atomic instructions
Sep 11 23:44:40.795690 kernel: CPU features: detected: Privileged Access Never
Sep 11 23:44:40.795697 kernel: CPU features: detected: RAS Extension Support
Sep 11 23:44:40.795705 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 11 23:44:40.795712 kernel: alternatives: applying system-wide alternatives
Sep 11 23:44:40.795719 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 11 23:44:40.795727 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Sep 11 23:44:40.795733 kernel: devtmpfs: initialized
Sep 11 23:44:40.795741 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 23:44:40.795748 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 23:44:40.795755 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 11 23:44:40.795763 kernel: 0 pages in range for non-PLT usage
Sep 11 23:44:40.795770 kernel: 508576 pages in range for PLT usage
Sep 11 23:44:40.795776 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 23:44:40.795783 kernel: SMBIOS 3.0.0 present.
Sep 11 23:44:40.795790 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 11 23:44:40.795797 kernel: DMI: Memory slots populated: 1/1
Sep 11 23:44:40.795804 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 23:44:40.795811 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 11 23:44:40.795818 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 11 23:44:40.795826 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 11 23:44:40.795833 kernel: audit: initializing netlink subsys (disabled)
Sep 11 23:44:40.795840 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Sep 11 23:44:40.795847 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 23:44:40.795854 kernel: cpuidle: using governor menu
Sep 11 23:44:40.795860 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 11 23:44:40.795867 kernel: ASID allocator initialised with 32768 entries
Sep 11 23:44:40.795874 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 23:44:40.795912 kernel: Serial: AMBA PL011 UART driver
Sep 11 23:44:40.795921 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 23:44:40.795928 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 23:44:40.795935 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 11 23:44:40.795942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 11 23:44:40.795949 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 23:44:40.795956 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 23:44:40.795963 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 11 23:44:40.795970 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 11 23:44:40.795976 kernel: ACPI: Added _OSI(Module Device)
Sep 11 23:44:40.795984 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 23:44:40.795991 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 23:44:40.795998 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 23:44:40.796005 kernel: ACPI: Interpreter enabled
Sep 11 23:44:40.796012 kernel: ACPI: Using GIC for interrupt routing
Sep 11 23:44:40.796019 kernel: ACPI: MCFG table detected, 1 entries
Sep 11 23:44:40.796026 kernel: ACPI: CPU0 has been hot-added
Sep 11 23:44:40.796032 kernel: ACPI: CPU1 has been hot-added
Sep 11 23:44:40.796039 kernel: ACPI: CPU2 has been hot-added
Sep 11 23:44:40.796046 kernel: ACPI: CPU3 has been hot-added
Sep 11 23:44:40.796054 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 11 23:44:40.796061 kernel: printk: legacy console [ttyAMA0] enabled
Sep 11 23:44:40.796068 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 23:44:40.796209 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 23:44:40.796284 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 11 23:44:40.796344 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 11 23:44:40.796400 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 11 23:44:40.796471 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 11 23:44:40.796481 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 11 23:44:40.796488 kernel: PCI host bridge to bus 0000:00
Sep 11 23:44:40.796553 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 11 23:44:40.796617 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 11 23:44:40.796673 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 11 23:44:40.796726 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 23:44:40.796821 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 11 23:44:40.796909 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 23:44:40.796972 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 11 23:44:40.797031 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 11 23:44:40.797088 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 11 23:44:40.797184 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 11 23:44:40.797459 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 11 23:44:40.797549 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 11 23:44:40.797608 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 11 23:44:40.797660 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 11 23:44:40.797765 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 11 23:44:40.797776 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 11 23:44:40.797783 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 11 23:44:40.797790 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 11 23:44:40.797800 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 11 23:44:40.797808 kernel: iommu: Default domain type: Translated
Sep 11 23:44:40.797815 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 11 23:44:40.797825 kernel: efivars: Registered efivars operations
Sep 11 23:44:40.797834 kernel: vgaarb: loaded
Sep 11 23:44:40.797843 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 11 23:44:40.797854 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 23:44:40.797862 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 23:44:40.797873 kernel: pnp: PnP ACPI init
Sep 11 23:44:40.798023 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 11 23:44:40.798036 kernel: pnp: PnP ACPI: found 1 devices
Sep 11 23:44:40.798043 kernel: NET: Registered PF_INET protocol family
Sep 11 23:44:40.798050 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 23:44:40.798057 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 23:44:40.798064 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 23:44:40.798072 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 23:44:40.798079 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 23:44:40.798088 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 23:44:40.798096 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 23:44:40.798103 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 23:44:40.798110 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 23:44:40.798117 kernel: PCI: CLS 0 bytes, default 64
Sep 11 23:44:40.798124 kernel: kvm [1]: HYP mode not available
Sep 11 23:44:40.798130 kernel: Initialise system trusted keyrings
Sep 11 23:44:40.798138 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 23:44:40.798144 kernel: Key type asymmetric registered
Sep 11 23:44:40.798153 kernel: Asymmetric key parser 'x509' registered
Sep 11 23:44:40.798160 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 11 23:44:40.798167 kernel: io scheduler mq-deadline registered
Sep 11 23:44:40.798174 kernel: io scheduler kyber registered
Sep 11 23:44:40.798181 kernel: io scheduler bfq registered
Sep 11 23:44:40.798188 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 11 23:44:40.798195 kernel: ACPI: button: Power Button [PWRB]
Sep 11 23:44:40.798202 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 11 23:44:40.798282 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 11 23:44:40.798295 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 23:44:40.798302 kernel: thunder_xcv, ver 1.0
Sep 11 23:44:40.798309 kernel: thunder_bgx, ver 1.0
Sep 11 23:44:40.798317 kernel: nicpf, ver 1.0
Sep 11 23:44:40.798324 kernel: nicvf, ver 1.0
Sep 11 23:44:40.798394 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 11 23:44:40.798451 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-11T23:44:40 UTC (1757634280)
Sep 11 23:44:40.798460 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 11 23:44:40.798467 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 11 23:44:40.798476 kernel: watchdog: NMI not fully supported
Sep 11 23:44:40.798483 kernel: watchdog: Hard watchdog permanently disabled
Sep 11 23:44:40.798490 kernel: NET: Registered PF_INET6 protocol family
Sep 11 23:44:40.798497 kernel: Segment Routing with IPv6
Sep 11 23:44:40.798504 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 23:44:40.798512 kernel: NET: Registered PF_PACKET protocol family
Sep 11 23:44:40.798519 kernel: Key type dns_resolver registered
Sep 11 23:44:40.798526 kernel: registered taskstats version 1
Sep 11 23:44:40.798533 kernel: Loading compiled-in X.509 certificates
Sep 11 23:44:40.798542 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: c76a2532dfc607285c10ef525f008171185de1e8'
Sep 11 23:44:40.798549 kernel: Demotion targets for Node 0: null
Sep 11 23:44:40.798556 kernel: Key type .fscrypt registered
Sep 11 23:44:40.798562 kernel: Key type fscrypt-provisioning registered
Sep 11 23:44:40.798569 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 23:44:40.798576 kernel: ima: Allocated hash algorithm: sha1
Sep 11 23:44:40.798585 kernel: ima: No architecture policies found
Sep 11 23:44:40.798596 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 11 23:44:40.798606 kernel: clk: Disabling unused clocks
Sep 11 23:44:40.798613 kernel: PM: genpd: Disabling unused power domains
Sep 11 23:44:40.798620 kernel: Warning: unable to open an initial console.
Sep 11 23:44:40.798627 kernel: Freeing unused kernel memory: 38912K
Sep 11 23:44:40.798634 kernel: Run /init as init process
Sep 11 23:44:40.798641 kernel: with arguments:
Sep 11 23:44:40.798648 kernel: /init
Sep 11 23:44:40.798663 kernel: with environment:
Sep 11 23:44:40.798675 kernel: HOME=/
Sep 11 23:44:40.798682 kernel: TERM=linux
Sep 11 23:44:40.798691 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 11 23:44:40.798699 systemd[1]: Successfully made /usr/ read-only.
Sep 11 23:44:40.798710 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 23:44:40.798718 systemd[1]: Detected virtualization kvm.
Sep 11 23:44:40.798725 systemd[1]: Detected architecture arm64.
Sep 11 23:44:40.798732 systemd[1]: Running in initrd.
Sep 11 23:44:40.798740 systemd[1]: No hostname configured, using default hostname.
Sep 11 23:44:40.798749 systemd[1]: Hostname set to .
Sep 11 23:44:40.798757 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 23:44:40.798764 systemd[1]: Queued start job for default target initrd.target.
Sep 11 23:44:40.798771 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 23:44:40.798779 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 23:44:40.798787 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 23:44:40.798794 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 23:44:40.798802 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 23:44:40.798813 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 23:44:40.798822 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 23:44:40.798830 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 23:44:40.798837 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 23:44:40.798845 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 23:44:40.798852 systemd[1]: Reached target paths.target - Path Units.
Sep 11 23:44:40.798860 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 23:44:40.798868 systemd[1]: Reached target swap.target - Swaps.
Sep 11 23:44:40.798876 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 23:44:40.798895 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 23:44:40.798903 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 23:44:40.798911 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 23:44:40.798918 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 23:44:40.798926 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 23:44:40.798933 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 23:44:40.798942 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 23:44:40.798950 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 23:44:40.798957 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 23:44:40.798965 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 23:44:40.798972 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 23:44:40.798980 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 23:44:40.798988 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 23:44:40.798996 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 23:44:40.799003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 23:44:40.799012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:44:40.799020 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 23:44:40.799028 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 23:44:40.799035 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 23:44:40.799044 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 23:44:40.799069 systemd-journald[245]: Collecting audit messages is disabled.
Sep 11 23:44:40.799088 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 23:44:40.799096 kernel: Bridge firewalling registered
Sep 11 23:44:40.799105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:44:40.799114 systemd-journald[245]: Journal started
Sep 11 23:44:40.799131 systemd-journald[245]: Runtime Journal (/run/log/journal/fe1fff2c31b5459db496848e067bcd7e) is 6M, max 48.5M, 42.4M free.
Sep 11 23:44:40.783109 systemd-modules-load[246]: Inserted module 'overlay'
Sep 11 23:44:40.798549 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 11 23:44:40.804692 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 23:44:40.806901 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 23:44:40.808275 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 23:44:40.813073 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 23:44:40.815011 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 23:44:40.817378 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 23:44:40.827462 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 23:44:40.833946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 23:44:40.837430 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 23:44:40.837490 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 23:44:40.842492 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 23:44:40.844061 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 23:44:40.849592 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 23:44:40.852506 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 23:44:40.870358 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=34cdae46b43e6281eb14909b07c5254135a938c8cecf4370cc2216c267809c7a
Sep 11 23:44:40.885161 systemd-resolved[289]: Positive Trust Anchors:
Sep 11 23:44:40.885180 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 23:44:40.885210 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 23:44:40.890100 systemd-resolved[289]: Defaulting to hostname 'linux'.
Sep 11 23:44:40.891051 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 23:44:40.896026 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 23:44:40.946920 kernel: SCSI subsystem initialized
Sep 11 23:44:40.951899 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 23:44:40.958914 kernel: iscsi: registered transport (tcp)
Sep 11 23:44:40.971906 kernel: iscsi: registered transport (qla4xxx)
Sep 11 23:44:40.971924 kernel: QLogic iSCSI HBA Driver
Sep 11 23:44:40.989075 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 23:44:41.007421 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 23:44:41.010181 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 23:44:41.055291 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 23:44:41.057680 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 23:44:41.124918 kernel: raid6: neonx8 gen() 15703 MB/s
Sep 11 23:44:41.141930 kernel: raid6: neonx4 gen() 15707 MB/s
Sep 11 23:44:41.158915 kernel: raid6: neonx2 gen() 7998 MB/s
Sep 11 23:44:41.175912 kernel: raid6: neonx1 gen() 9583 MB/s
Sep 11 23:44:41.192911 kernel: raid6: int64x8 gen() 6634 MB/s
Sep 11 23:44:41.209930 kernel: raid6: int64x4 gen() 7192 MB/s
Sep 11 23:44:41.226916 kernel: raid6: int64x2 gen() 5960 MB/s
Sep 11 23:44:41.244200 kernel: raid6: int64x1 gen() 4930 MB/s
Sep 11 23:44:41.244226 kernel: raid6: using algorithm neonx4 gen() 15707 MB/s
Sep 11 23:44:41.262188 kernel: raid6: .... xor() 12059 MB/s, rmw enabled
Sep 11 23:44:41.262248 kernel: raid6: using neon recovery algorithm
Sep 11 23:44:41.268186 kernel: xor: measuring software checksum speed
Sep 11 23:44:41.268224 kernel: 8regs : 21590 MB/sec
Sep 11 23:44:41.268932 kernel: 32regs : 20500 MB/sec
Sep 11 23:44:41.270367 kernel: arm64_neon : 27851 MB/sec
Sep 11 23:44:41.270380 kernel: xor: using function: arm64_neon (27851 MB/sec)
Sep 11 23:44:41.322918 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 23:44:41.328801 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 23:44:41.332404 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 23:44:41.364606 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Sep 11 23:44:41.368675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 23:44:41.370729 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 23:44:41.397151 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 11 23:44:41.419250 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 23:44:41.421906 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 23:44:41.473598 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 23:44:41.477143 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 23:44:41.524912 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 11 23:44:41.525141 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 11 23:44:41.528919 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 11 23:44:41.528959 kernel: GPT:9289727 != 19775487
Sep 11 23:44:41.528969 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 11 23:44:41.534953 kernel: GPT:9289727 != 19775487
Sep 11 23:44:41.535905 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 11 23:44:41.537901 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 23:44:41.545270 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 23:44:41.545388 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:44:41.549665 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:44:41.556127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:44:41.570836 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 11 23:44:41.579124 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 11 23:44:41.580704 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 23:44:41.582940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:44:41.601810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 23:44:41.608159 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 11 23:44:41.609494 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 11 23:44:41.612973 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 23:44:41.615353 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 23:44:41.617659 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 23:44:41.620678 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 11 23:44:41.622708 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 11 23:44:41.638615 disk-uuid[593]: Primary Header is updated.
Sep 11 23:44:41.638615 disk-uuid[593]: Secondary Entries is updated.
Sep 11 23:44:41.638615 disk-uuid[593]: Secondary Header is updated.
Sep 11 23:44:41.642916 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 23:44:41.645207 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 23:44:42.649911 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 23:44:42.650334 disk-uuid[596]: The operation has completed successfully.
Sep 11 23:44:42.674240 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 11 23:44:42.674342 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 11 23:44:42.698655 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 11 23:44:42.720985 sh[612]: Success
Sep 11 23:44:42.733910 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 11 23:44:42.733947 kernel: device-mapper: uevent: version 1.0.3
Sep 11 23:44:42.733965 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 11 23:44:42.741925 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 11 23:44:42.769502 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 11 23:44:42.772378 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 11 23:44:42.787253 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 11 23:44:42.791927 kernel: BTRFS: device fsid 070f11bc-6881-4580-bbfd-8e1bd2605f24 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (624)
Sep 11 23:44:42.791963 kernel: BTRFS info (device dm-0): first mount of filesystem 070f11bc-6881-4580-bbfd-8e1bd2605f24
Sep 11 23:44:42.794093 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:44:42.798042 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 11 23:44:42.798063 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 11 23:44:42.799115 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 11 23:44:42.800465 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 23:44:42.801962 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 11 23:44:42.802731 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 11 23:44:42.804492 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 11 23:44:42.833003 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657)
Sep 11 23:44:42.833053 kernel: BTRFS info (device vda6): first mount of filesystem 2cbf2c8e-1b28-4a7c-a6d6-f07090d47234
Sep 11 23:44:42.833063 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:44:42.836488 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 23:44:42.836524 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 23:44:42.840921 kernel: BTRFS info (device vda6): last unmount of filesystem 2cbf2c8e-1b28-4a7c-a6d6-f07090d47234
Sep 11 23:44:42.842332 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 11 23:44:42.846468 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 11 23:44:42.917926 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 23:44:42.921194 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 23:44:42.953463 ignition[707]: Ignition 2.21.0
Sep 11 23:44:42.953480 ignition[707]: Stage: fetch-offline
Sep 11 23:44:42.953509 ignition[707]: no configs at "/usr/lib/ignition/base.d"
Sep 11 23:44:42.953517 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:44:42.953682 ignition[707]: parsed url from cmdline: ""
Sep 11 23:44:42.953684 ignition[707]: no config URL provided
Sep 11 23:44:42.953689 ignition[707]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 23:44:42.953696 ignition[707]: no config at "/usr/lib/ignition/user.ign"
Sep 11 23:44:42.953715 ignition[707]: op(1): [started] loading QEMU firmware config module
Sep 11 23:44:42.953719 ignition[707]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 11 23:44:42.963680 ignition[707]: op(1): [finished] loading QEMU firmware config module
Sep 11 23:44:42.964346 systemd-networkd[802]: lo: Link UP
Sep 11 23:44:42.964349 systemd-networkd[802]: lo: Gained carrier
Sep 11 23:44:42.965140 systemd-networkd[802]: Enumeration completed
Sep 11 23:44:42.965545 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 23:44:42.965549 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 23:44:42.966297 systemd-networkd[802]: eth0: Link UP
Sep 11 23:44:42.966439 systemd-networkd[802]: eth0: Gained carrier
Sep 11 23:44:42.966447 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 23:44:42.967324 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 23:44:42.968940 systemd[1]: Reached target network.target - Network.
Sep 11 23:44:42.983929 systemd-networkd[802]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 23:44:43.017672 ignition[707]: parsing config with SHA512: 2b6c5d23ec446e4e477d101b635390d037ee8d72a31b3772fd33318497efbb62eca849a0c4e5ffb5157b3568f97be89006bbd37aae909b6d021c81a4804c204f
Sep 11 23:44:43.021831 unknown[707]: fetched base config from "system"
Sep 11 23:44:43.021844 unknown[707]: fetched user config from "qemu"
Sep 11 23:44:43.022286 ignition[707]: fetch-offline: fetch-offline passed
Sep 11 23:44:43.024430 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 23:44:43.022351 ignition[707]: Ignition finished successfully
Sep 11 23:44:43.025800 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 11 23:44:43.026613 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 11 23:44:43.061588 ignition[810]: Ignition 2.21.0
Sep 11 23:44:43.061608 ignition[810]: Stage: kargs
Sep 11 23:44:43.061735 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Sep 11 23:44:43.061744 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:44:43.063201 ignition[810]: kargs: kargs passed
Sep 11 23:44:43.066256 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 11 23:44:43.063278 ignition[810]: Ignition finished successfully
Sep 11 23:44:43.068264 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 11 23:44:43.094780 ignition[818]: Ignition 2.21.0
Sep 11 23:44:43.094799 ignition[818]: Stage: disks
Sep 11 23:44:43.094963 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Sep 11 23:44:43.094973 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:44:43.098028 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 11 23:44:43.096380 ignition[818]: disks: disks passed
Sep 11 23:44:43.099233 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 11 23:44:43.096455 ignition[818]: Ignition finished successfully
Sep 11 23:44:43.101113 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 11 23:44:43.103142 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 23:44:43.104638 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 23:44:43.106702 systemd[1]: Reached target basic.target - Basic System.
Sep 11 23:44:43.109287 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 11 23:44:43.138621 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 11 23:44:43.143288 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 11 23:44:43.145531 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 11 23:44:43.208914 kernel: EXT4-fs (vda9): mounted filesystem 358f7642-1e9a-4460-bcb4-1ef3d420e352 r/w with ordered data mode. Quota mode: none.
Sep 11 23:44:43.208987 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 11 23:44:43.210316 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 11 23:44:43.213850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 23:44:43.216250 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 11 23:44:43.217307 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 11 23:44:43.217361 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 11 23:44:43.217386 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 23:44:43.231521 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 11 23:44:43.233781 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 11 23:44:43.238914 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Sep 11 23:44:43.238943 kernel: BTRFS info (device vda6): first mount of filesystem 2cbf2c8e-1b28-4a7c-a6d6-f07090d47234
Sep 11 23:44:43.239978 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:44:43.243674 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 23:44:43.243721 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 23:44:43.247045 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 23:44:43.269797 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Sep 11 23:44:43.273915 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Sep 11 23:44:43.277839 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Sep 11 23:44:43.280499 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 11 23:44:43.347744 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 11 23:44:43.350178 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 11 23:44:43.351827 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 11 23:44:43.368906 kernel: BTRFS info (device vda6): last unmount of filesystem 2cbf2c8e-1b28-4a7c-a6d6-f07090d47234
Sep 11 23:44:43.389024 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 11 23:44:43.402446 ignition[949]: INFO : Ignition 2.21.0
Sep 11 23:44:43.402446 ignition[949]: INFO : Stage: mount
Sep 11 23:44:43.404632 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 23:44:43.404632 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:44:43.406802 ignition[949]: INFO : mount: mount passed
Sep 11 23:44:43.406802 ignition[949]: INFO : Ignition finished successfully
Sep 11 23:44:43.406505 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 11 23:44:43.408709 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 11 23:44:43.791131 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 11 23:44:43.792610 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 23:44:43.822730 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (962)
Sep 11 23:44:43.822771 kernel: BTRFS info (device vda6): first mount of filesystem 2cbf2c8e-1b28-4a7c-a6d6-f07090d47234
Sep 11 23:44:43.822781 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:44:43.826340 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 23:44:43.826376 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 23:44:43.827766 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 23:44:43.853325 ignition[979]: INFO : Ignition 2.21.0
Sep 11 23:44:43.853325 ignition[979]: INFO : Stage: files
Sep 11 23:44:43.853325 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 23:44:43.853325 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:44:43.858195 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 23:44:43.858195 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 23:44:43.858195 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 23:44:43.862147 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 23:44:43.863537 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 23:44:43.863537 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 23:44:43.862712 unknown[979]: wrote ssh authorized keys file for user: core
Sep 11 23:44:43.867383 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 11 23:44:43.867383 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 11 23:44:43.921007 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 23:44:44.337616 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 11 23:44:44.337616 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 23:44:44.341952 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 23:44:44.341952 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 23:44:44.341952 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 23:44:44.341952 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 23:44:44.341952 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 23:44:44.341952 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 23:44:44.341952 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 23:44:44.341952 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 23:44:44.341952 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 23:44:44.341952 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 11 23:44:44.360956 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 11 23:44:44.360956 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 11 23:44:44.360956 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 11 23:44:44.858998 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 23:44:44.950028 systemd-networkd[802]: eth0: Gained IPv6LL
Sep 11 23:44:45.367770 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 11 23:44:45.367770 ignition[979]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 11 23:44:45.372027 ignition[979]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 23:44:45.372027 ignition[979]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 23:44:45.372027 ignition[979]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 11 23:44:45.372027 ignition[979]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 11 23:44:45.372027 ignition[979]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 23:44:45.372027 ignition[979]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 23:44:45.372027 ignition[979]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 11 23:44:45.372027 ignition[979]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 11 23:44:45.387086 ignition[979]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 23:44:45.389098 ignition[979]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 23:44:45.391782 ignition[979]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 11 23:44:45.391782 ignition[979]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 23:44:45.391782 ignition[979]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 23:44:45.391782 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 23:44:45.391782 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 23:44:45.391782 ignition[979]: INFO : files: files passed
Sep 11 23:44:45.391782 ignition[979]: INFO : Ignition finished successfully
Sep 11 23:44:45.393449 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 23:44:45.396528 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 23:44:45.398703 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 23:44:45.414631 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 23:44:45.414733 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 23:44:45.416800 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 11 23:44:45.420017 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 23:44:45.420017 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 23:44:45.423151 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 23:44:45.422735 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 23:44:45.424628 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 23:44:45.426564 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 23:44:45.456621 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 23:44:45.457894 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 23:44:45.460250 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 23:44:45.461300 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 23:44:45.463125 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 23:44:45.463950 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 23:44:45.478242 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 23:44:45.480785 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 23:44:45.505011 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 23:44:45.507324 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 23:44:45.508618 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 23:44:45.510500 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 23:44:45.510626 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 23:44:45.513170 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 23:44:45.515156 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 23:44:45.516852 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 23:44:45.518679 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 23:44:45.520712 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 23:44:45.522786 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 23:44:45.524953 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 23:44:45.527036 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 23:44:45.529194 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 23:44:45.531306 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 23:44:45.533162 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 23:44:45.534770 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 23:44:45.534916 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 23:44:45.537496 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 23:44:45.539647 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 23:44:45.541795 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 23:44:45.541876 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 23:44:45.544192 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 23:44:45.544327 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 23:44:45.547388 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 23:44:45.547502 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 23:44:45.549652 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 23:44:45.551391 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 23:44:45.554923 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 23:44:45.557100 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 23:44:45.559349 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 23:44:45.561011 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 23:44:45.561099 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 23:44:45.562831 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 23:44:45.562918 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 23:44:45.564652 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 23:44:45.564774 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 23:44:45.566749 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 23:44:45.566850 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 23:44:45.569333 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 23:44:45.571000 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 23:44:45.572325 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 23:44:45.572437 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 23:44:45.574773 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 23:44:45.574874 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 23:44:45.580290 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 23:44:45.594060 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 23:44:45.602637 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 23:44:45.606771 ignition[1035]: INFO : Ignition 2.21.0
Sep 11 23:44:45.606771 ignition[1035]: INFO : Stage: umount
Sep 11 23:44:45.608598 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 23:44:45.608598 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:44:45.610981 ignition[1035]: INFO : umount: umount passed
Sep 11 23:44:45.610981 ignition[1035]: INFO : Ignition finished successfully
Sep 11 23:44:45.611287 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 23:44:45.611401 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 23:44:45.613062 systemd[1]: Stopped target network.target - Network.
Sep 11 23:44:45.614688 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 23:44:45.614750 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 23:44:45.616640 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 23:44:45.616685 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 23:44:45.618535 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 23:44:45.618583 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 23:44:45.620270 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 23:44:45.620311 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 23:44:45.622243 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 23:44:45.624202 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 23:44:45.628537 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 23:44:45.628646 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 23:44:45.631978 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 23:44:45.632257 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 23:44:45.632294 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 23:44:45.638517 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 23:44:45.639367 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 23:44:45.639465 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 23:44:45.644094 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 23:44:45.644250 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 23:44:45.645692 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 23:44:45.645726 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 23:44:45.648984 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 23:44:45.650039 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 23:44:45.650104 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 23:44:45.652470 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 23:44:45.652514 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 23:44:45.655506 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 23:44:45.655549 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 23:44:45.657704 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 23:44:45.662501 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 23:44:45.662792 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 23:44:45.662861 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 23:44:45.665759 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 23:44:45.665833 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 23:44:45.676502 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 23:44:45.676655 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 23:44:45.678914 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 23:44:45.678950 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 23:44:45.680995 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 23:44:45.681026 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 23:44:45.682868 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 23:44:45.682921 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 23:44:45.685759 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 23:44:45.685808 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 23:44:45.688525 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 23:44:45.688578 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 23:44:45.692176 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 23:44:45.693378 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 23:44:45.693436 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 23:44:45.696572 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 23:44:45.696617 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 23:44:45.699967 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 11 23:44:45.700011 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 23:44:45.703438 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 23:44:45.703482 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 23:44:45.705926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 23:44:45.705993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:44:45.709843 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 23:44:45.709976 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 23:44:45.711509 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 23:44:45.711618 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 23:44:45.714253 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 23:44:45.716056 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 23:44:45.735913 systemd[1]: Switching root.
Sep 11 23:44:45.764112 systemd-journald[245]: Journal stopped
Sep 11 23:44:46.531760 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Sep 11 23:44:46.531814 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 23:44:46.531835 kernel: SELinux: policy capability open_perms=1
Sep 11 23:44:46.531855 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 23:44:46.531867 kernel: SELinux: policy capability always_check_network=0
Sep 11 23:44:46.531898 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 23:44:46.531912 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 23:44:46.531921 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 23:44:46.531930 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 23:44:46.531939 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 23:44:46.531948 kernel: audit: type=1403 audit(1757634285.930:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 23:44:46.531959 systemd[1]: Successfully loaded SELinux policy in 46.648ms.
Sep 11 23:44:46.531974 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.206ms.
Sep 11 23:44:46.531987 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 23:44:46.532005 systemd[1]: Detected virtualization kvm.
Sep 11 23:44:46.532024 systemd[1]: Detected architecture arm64.
Sep 11 23:44:46.532034 systemd[1]: Detected first boot.
Sep 11 23:44:46.532047 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 23:44:46.532057 zram_generator::config[1080]: No configuration found.
Sep 11 23:44:46.532068 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 23:44:46.532077 systemd[1]: Populated /etc with preset unit settings.
Sep 11 23:44:46.532088 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 23:44:46.532100 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 23:44:46.532110 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 23:44:46.532120 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 23:44:46.532131 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 23:44:46.532141 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 23:44:46.532151 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 23:44:46.532160 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 23:44:46.532171 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 23:44:46.532186 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 23:44:46.532196 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 23:44:46.532206 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 23:44:46.532224 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 23:44:46.532236 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 23:44:46.532246 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 23:44:46.532257 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 23:44:46.532267 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 23:44:46.532277 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 23:44:46.532289 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 11 23:44:46.532300 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 23:44:46.532310 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 23:44:46.532322 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 23:44:46.532332 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 23:44:46.532342 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 23:44:46.532352 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 23:44:46.532361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 23:44:46.532372 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 23:44:46.532383 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 23:44:46.532393 systemd[1]: Reached target swap.target - Swaps.
Sep 11 23:44:46.532403 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 23:44:46.532413 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 23:44:46.532423 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 23:44:46.532433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 23:44:46.532443 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 23:44:46.532453 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 23:44:46.532464 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 23:44:46.532477 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 23:44:46.532487 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 23:44:46.532498 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 23:44:46.532508 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 23:44:46.532518 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 23:44:46.532528 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 23:44:46.532538 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 23:44:46.532548 systemd[1]: Reached target machines.target - Containers.
Sep 11 23:44:46.532559 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 23:44:46.532569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 23:44:46.532580 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 23:44:46.532590 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 23:44:46.532600 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 23:44:46.532609 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 23:44:46.532619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 23:44:46.532629 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 23:44:46.532641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 23:44:46.532651 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 23:44:46.532661 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 23:44:46.532671 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 23:44:46.532681 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 23:44:46.532692 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 23:44:46.532702 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 23:44:46.532712 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 23:44:46.532722 kernel: loop: module loaded
Sep 11 23:44:46.532733 kernel: ACPI: bus type drm_connector registered
Sep 11 23:44:46.532742 kernel: fuse: init (API version 7.41)
Sep 11 23:44:46.532752 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 23:44:46.532762 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 23:44:46.532772 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 23:44:46.532783 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 23:44:46.532793 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 23:44:46.532804 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 23:44:46.532814 systemd[1]: Stopped verity-setup.service.
Sep 11 23:44:46.532824 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 23:44:46.532834 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 23:44:46.532844 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 23:44:46.532854 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 23:44:46.532865 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 23:44:46.532915 systemd-journald[1152]: Collecting audit messages is disabled.
Sep 11 23:44:46.532940 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 23:44:46.532952 systemd-journald[1152]: Journal started
Sep 11 23:44:46.532975 systemd-journald[1152]: Runtime Journal (/run/log/journal/fe1fff2c31b5459db496848e067bcd7e) is 6M, max 48.5M, 42.4M free.
Sep 11 23:44:46.278512 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 23:44:46.305935 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 11 23:44:46.306354 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 23:44:46.535773 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 23:44:46.537901 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 23:44:46.539354 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 23:44:46.542058 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 23:44:46.542308 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 23:44:46.543799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 23:44:46.544022 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 23:44:46.545439 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 23:44:46.545598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 23:44:46.547066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 23:44:46.547242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 23:44:46.550298 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 23:44:46.550460 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 23:44:46.551837 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 23:44:46.552136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 23:44:46.554349 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 23:44:46.555785 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 23:44:46.558937 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 23:44:46.560591 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 23:44:46.573925 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 23:44:46.575801 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 23:44:46.578359 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 23:44:46.580532 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 23:44:46.581743 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 23:44:46.581780 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 23:44:46.583693 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 23:44:46.602743 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 23:44:46.604033 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 23:44:46.605368 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 23:44:46.607826 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 23:44:46.609111 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 23:44:46.612025 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 23:44:46.613203 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 23:44:46.614116 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 23:44:46.615823 systemd-journald[1152]: Time spent on flushing to /var/log/journal/fe1fff2c31b5459db496848e067bcd7e is 15.114ms for 886 entries.
Sep 11 23:44:46.615823 systemd-journald[1152]: System Journal (/var/log/journal/fe1fff2c31b5459db496848e067bcd7e) is 8M, max 195.6M, 187.6M free.
Sep 11 23:44:46.642631 systemd-journald[1152]: Received client request to flush runtime journal.
Sep 11 23:44:46.617126 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 23:44:46.619864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 23:44:46.624999 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 23:44:46.627753 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 23:44:46.646922 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 23:44:46.650923 kernel: loop0: detected capacity change from 0 to 119320
Sep 11 23:44:46.651318 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 23:44:46.653692 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 23:44:46.659119 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 23:44:46.660856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 23:44:46.665107 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 23:44:46.669226 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Sep 11 23:44:46.669244 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Sep 11 23:44:46.674422 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 23:44:46.680333 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 23:44:46.684913 kernel: loop1: detected capacity change from 0 to 100600
Sep 11 23:44:46.694356 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 23:44:46.712929 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 23:44:46.715804 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 23:44:46.720243 kernel: loop2: detected capacity change from 0 to 207008
Sep 11 23:44:46.741414 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Sep 11 23:44:46.741432 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Sep 11 23:44:46.742939 kernel: loop3: detected capacity change from 0 to 119320
Sep 11 23:44:46.747108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 23:44:46.750904 kernel: loop4: detected capacity change from 0 to 100600
Sep 11 23:44:46.757902 kernel: loop5: detected capacity change from 0 to 207008
Sep 11 23:44:46.762534 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 11 23:44:46.763008 (sd-merge)[1224]: Merged extensions into '/usr'.
Sep 11 23:44:46.766330 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 11 23:44:46.766349 systemd[1]: Reloading...
Sep 11 23:44:46.831924 zram_generator::config[1253]: No configuration found.
Sep 11 23:44:46.875695 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 11 23:44:46.967831 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 23:44:46.968288 systemd[1]: Reloading finished in 201 ms.
Sep 11 23:44:46.993671 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 11 23:44:46.995953 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 11 23:44:47.006191 systemd[1]: Starting ensure-sysext.service...
Sep 11 23:44:47.008303 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 23:44:47.017327 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Sep 11 23:44:47.017363 systemd[1]: Reloading...
Sep 11 23:44:47.025121 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 11 23:44:47.025155 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 11 23:44:47.025385 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 11 23:44:47.025566 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 11 23:44:47.026176 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 11 23:44:47.026387 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Sep 11 23:44:47.026435 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Sep 11 23:44:47.031038 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 23:44:47.031051 systemd-tmpfiles[1286]: Skipping /boot
Sep 11 23:44:47.040991 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 23:44:47.041006 systemd-tmpfiles[1286]: Skipping /boot
Sep 11 23:44:47.070970 zram_generator::config[1313]: No configuration found.
Sep 11 23:44:47.204609 systemd[1]: Reloading finished in 186 ms.
Sep 11 23:44:47.230392 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 23:44:47.246925 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 23:44:47.252805 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 11 23:44:47.255115 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 11 23:44:47.258009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 23:44:47.262081 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 11 23:44:47.269403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 23:44:47.270508 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 23:44:47.272812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 23:44:47.278682 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 23:44:47.279936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 23:44:47.280058 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 23:44:47.281895 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 11 23:44:47.285340 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 23:44:47.286945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 23:44:47.288668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 23:44:47.288802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 23:44:47.290775 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 23:44:47.290960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 23:44:47.298772 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 11 23:44:47.307901 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 11 23:44:47.313767 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 23:44:47.315012 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 23:44:47.317390 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 23:44:47.321124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 23:44:47.330232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 23:44:47.331729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 23:44:47.331854 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 23:44:47.333035 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 11 23:44:47.336324 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 23:44:47.336520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 23:44:47.340515 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 23:44:47.340678 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 23:44:47.343273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 23:44:47.343457 augenrules[1390]: No rules
Sep 11 23:44:47.348069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 23:44:47.350156 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 23:44:47.351925 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 23:44:47.353451 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 23:44:47.353597 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 23:44:47.357042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 11 23:44:47.360914 systemd[1]: Finished ensure-sysext.service.
Sep 11 23:44:47.366232 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 11 23:44:47.370112 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 23:44:47.370184 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 23:44:47.371976 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 11 23:44:47.374586 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 23:44:47.376898 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 11 23:44:47.379747 systemd-resolved[1352]: Positive Trust Anchors:
Sep 11 23:44:47.379769 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 23:44:47.379800 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 23:44:47.379954 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 11 23:44:47.387358 systemd-resolved[1352]: Defaulting to hostname 'linux'.
Sep 11 23:44:47.388662 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 23:44:47.390496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 23:44:47.396682 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 11 23:44:47.408189 systemd-udevd[1404]: Using default interface naming scheme 'v255'.
Sep 11 23:44:47.423948 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 23:44:47.427747 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 23:44:47.429874 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 11 23:44:47.431913 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 23:44:47.434139 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 11 23:44:47.435565 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 11 23:44:47.438041 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 11 23:44:47.439314 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 11 23:44:47.439352 systemd[1]: Reached target paths.target - Path Units.
Sep 11 23:44:47.441025 systemd[1]: Reached target time-set.target - System Time Set.
Sep 11 23:44:47.442231 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 11 23:44:47.444115 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 11 23:44:47.445394 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 23:44:47.448856 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 11 23:44:47.451399 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 11 23:44:47.455970 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 11 23:44:47.459183 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 11 23:44:47.461969 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 11 23:44:47.465220 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 11 23:44:47.467471 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 11 23:44:47.470611 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 11 23:44:47.478305 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 23:44:47.479610 systemd[1]: Reached target basic.target - Basic System.
Sep 11 23:44:47.480656 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 23:44:47.480684 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 23:44:47.487216 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 23:44:47.492904 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 23:44:47.496302 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 23:44:47.503323 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 23:44:47.504663 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 23:44:47.507481 jq[1445]: false Sep 11 23:44:47.508046 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 23:44:47.510915 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 23:44:47.515001 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 23:44:47.517619 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 23:44:47.522945 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 23:44:47.524853 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 23:44:47.527143 extend-filesystems[1446]: Found /dev/vda6 Sep 11 23:44:47.525305 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 11 23:44:47.525846 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 11 23:44:47.528933 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 23:44:47.538056 jq[1460]: true Sep 11 23:44:47.538279 extend-filesystems[1446]: Found /dev/vda9 Sep 11 23:44:47.543119 extend-filesystems[1446]: Checking size of /dev/vda9 Sep 11 23:44:47.544060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 23:44:47.545594 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 23:44:47.545800 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 23:44:47.548276 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 23:44:47.548582 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 11 23:44:47.551261 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 23:44:47.551442 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 23:44:47.559698 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 11 23:44:47.560090 update_engine[1458]: I20250911 23:44:47.554431 1458 main.cc:92] Flatcar Update Engine starting Sep 11 23:44:47.561909 jq[1473]: true Sep 11 23:44:47.564641 tar[1472]: linux-arm64/LICENSE Sep 11 23:44:47.564872 tar[1472]: linux-arm64/helm Sep 11 23:44:47.582508 dbus-daemon[1440]: [system] SELinux support is enabled Sep 11 23:44:47.582669 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 11 23:44:47.586982 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 23:44:47.587011 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 11 23:44:47.594032 extend-filesystems[1446]: Resized partition /dev/vda9 Sep 11 23:44:47.596014 update_engine[1458]: I20250911 23:44:47.589031 1458 update_check_scheduler.cc:74] Next update check in 4m48s Sep 11 23:44:47.588357 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 23:44:47.596234 extend-filesystems[1495]: resize2fs 1.47.2 (1-Jan-2025) Sep 11 23:44:47.602957 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 11 23:44:47.588372 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 23:44:47.591068 systemd[1]: Started update-engine.service - Update Engine. Sep 11 23:44:47.596137 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 23:44:47.618557 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 23:44:47.622561 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 23:44:47.627903 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 11 23:44:47.627896 systemd-logind[1456]: New seat seat0. Sep 11 23:44:47.630673 systemd[1]: Started systemd-logind.service - User Login Management. Sep 11 23:44:47.631712 systemd-networkd[1413]: lo: Link UP Sep 11 23:44:47.631716 systemd-networkd[1413]: lo: Gained carrier Sep 11 23:44:47.633254 systemd-networkd[1413]: Enumeration completed Sep 11 23:44:47.633428 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 23:44:47.634079 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 23:44:47.634083 systemd-networkd[1413]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 11 23:44:47.635070 systemd-networkd[1413]: eth0: Link UP Sep 11 23:44:47.635219 systemd-networkd[1413]: eth0: Gained carrier Sep 11 23:44:47.635235 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 23:44:47.636221 systemd[1]: Reached target network.target - Network. Sep 11 23:44:47.638781 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 23:44:47.642141 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 23:44:47.648915 extend-filesystems[1495]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 11 23:44:47.648915 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 11 23:44:47.648915 extend-filesystems[1495]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 11 23:44:47.674292 extend-filesystems[1446]: Resized filesystem in /dev/vda9 Sep 11 23:44:47.649108 systemd-networkd[1413]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 23:44:47.677379 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Sep 11 23:44:47.650599 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 23:44:47.651758 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Sep 11 23:44:47.652414 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 11 23:44:47.652467 systemd-timesyncd[1403]: Initial clock synchronization to Thu 2025-09-11 23:44:47.335455 UTC. Sep 11 23:44:47.653773 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 23:44:47.655117 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 11 23:44:47.658687 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Sep 11 23:44:47.660432 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 23:44:47.669497 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 23:44:47.674323 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 23:44:47.697176 (ntainerd)[1521]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 23:44:47.697406 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 23:44:47.798425 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (Power Button) Sep 11 23:44:47.828579 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 23:44:47.911335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 23:44:47.915201 containerd[1521]: time="2025-09-11T23:44:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 23:44:47.915201 containerd[1521]: time="2025-09-11T23:44:47.914264560Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 11 23:44:47.931181 containerd[1521]: time="2025-09-11T23:44:47.931130040Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.44µs" Sep 11 23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931282840Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931310320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 
23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931455680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931470880Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931495280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931540120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931553080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931767320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931781600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931795000Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931802880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 23:44:47.931929 containerd[1521]: time="2025-09-11T23:44:47.931907440Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 23:44:47.932170 containerd[1521]: time="2025-09-11T23:44:47.932122160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 23:44:47.932170 containerd[1521]: time="2025-09-11T23:44:47.932154360Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 23:44:47.932170 containerd[1521]: time="2025-09-11T23:44:47.932164440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 23:44:47.932232 containerd[1521]: time="2025-09-11T23:44:47.932198880Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 23:44:47.932515 containerd[1521]: time="2025-09-11T23:44:47.932472640Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 23:44:47.932561 containerd[1521]: time="2025-09-11T23:44:47.932545680Z" level=info msg="metadata content store policy set" policy=shared Sep 11 23:44:47.936254 containerd[1521]: time="2025-09-11T23:44:47.936211280Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 23:44:47.936289 containerd[1521]: time="2025-09-11T23:44:47.936270080Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 23:44:47.936289 containerd[1521]: time="2025-09-11T23:44:47.936285800Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 23:44:47.936323 containerd[1521]: time="2025-09-11T23:44:47.936298400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 23:44:47.936323 
containerd[1521]: time="2025-09-11T23:44:47.936310480Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 23:44:47.936359 containerd[1521]: time="2025-09-11T23:44:47.936330120Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 23:44:47.936359 containerd[1521]: time="2025-09-11T23:44:47.936345880Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 23:44:47.936391 containerd[1521]: time="2025-09-11T23:44:47.936361720Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 23:44:47.936391 containerd[1521]: time="2025-09-11T23:44:47.936376160Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 23:44:47.936391 containerd[1521]: time="2025-09-11T23:44:47.936385840Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 23:44:47.936480 containerd[1521]: time="2025-09-11T23:44:47.936394640Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 23:44:47.936480 containerd[1521]: time="2025-09-11T23:44:47.936406000Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 23:44:47.936546 containerd[1521]: time="2025-09-11T23:44:47.936516520Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 23:44:47.936546 containerd[1521]: time="2025-09-11T23:44:47.936536360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 23:44:47.936607 containerd[1521]: time="2025-09-11T23:44:47.936557720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 23:44:47.936607 
containerd[1521]: time="2025-09-11T23:44:47.936569160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 23:44:47.936607 containerd[1521]: time="2025-09-11T23:44:47.936578560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 23:44:47.936607 containerd[1521]: time="2025-09-11T23:44:47.936587960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 23:44:47.936607 containerd[1521]: time="2025-09-11T23:44:47.936599240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 23:44:47.936688 containerd[1521]: time="2025-09-11T23:44:47.936608320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 23:44:47.936688 containerd[1521]: time="2025-09-11T23:44:47.936619160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 23:44:47.936688 containerd[1521]: time="2025-09-11T23:44:47.936628840Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 23:44:47.936688 containerd[1521]: time="2025-09-11T23:44:47.936638120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 23:44:47.936854 containerd[1521]: time="2025-09-11T23:44:47.936821120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 23:44:47.936854 containerd[1521]: time="2025-09-11T23:44:47.936844840Z" level=info msg="Start snapshots syncer" Sep 11 23:44:47.936909 containerd[1521]: time="2025-09-11T23:44:47.936871080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 23:44:47.937353 containerd[1521]: time="2025-09-11T23:44:47.937079760Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 23:44:47.937454 containerd[1521]: time="2025-09-11T23:44:47.937396080Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 23:44:47.937874 containerd[1521]: time="2025-09-11T23:44:47.937826720Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 23:44:47.938026 containerd[1521]: time="2025-09-11T23:44:47.938003480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 23:44:47.938252 containerd[1521]: time="2025-09-11T23:44:47.938052800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 23:44:47.938252 containerd[1521]: time="2025-09-11T23:44:47.938074680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 23:44:47.938252 containerd[1521]: time="2025-09-11T23:44:47.938087560Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 23:44:47.938252 containerd[1521]: time="2025-09-11T23:44:47.938104520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 23:44:47.938252 containerd[1521]: time="2025-09-11T23:44:47.938119560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 23:44:47.938252 containerd[1521]: time="2025-09-11T23:44:47.938135760Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 23:44:47.938252 containerd[1521]: time="2025-09-11T23:44:47.938181040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 23:44:47.938252 containerd[1521]: time="2025-09-11T23:44:47.938197960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 23:44:47.938252 containerd[1521]: time="2025-09-11T23:44:47.938225760Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 23:44:47.939984 containerd[1521]: time="2025-09-11T23:44:47.939947320Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 23:44:47.940519 containerd[1521]: time="2025-09-11T23:44:47.940495600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 23:44:47.940558 containerd[1521]: time="2025-09-11T23:44:47.940523720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 23:44:47.940558 containerd[1521]: time="2025-09-11T23:44:47.940536720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 23:44:47.940558 containerd[1521]: time="2025-09-11T23:44:47.940549040Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 23:44:47.940617 containerd[1521]: time="2025-09-11T23:44:47.940563080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 23:44:47.940617 containerd[1521]: time="2025-09-11T23:44:47.940577800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 23:44:47.940683 containerd[1521]: time="2025-09-11T23:44:47.940664760Z" level=info msg="runtime interface created" Sep 11 23:44:47.940707 containerd[1521]: time="2025-09-11T23:44:47.940684080Z" level=info msg="created NRI interface" Sep 11 23:44:47.940707 containerd[1521]: time="2025-09-11T23:44:47.940697880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 23:44:47.940740 containerd[1521]: time="2025-09-11T23:44:47.940716160Z" level=info msg="Connect containerd service" Sep 11 23:44:47.940772 containerd[1521]: time="2025-09-11T23:44:47.940759040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 23:44:47.941670 
containerd[1521]: time="2025-09-11T23:44:47.941627440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 23:44:48.003343 tar[1472]: linux-arm64/README.md Sep 11 23:44:48.006774 containerd[1521]: time="2025-09-11T23:44:48.006395200Z" level=info msg="Start subscribing containerd event" Sep 11 23:44:48.006774 containerd[1521]: time="2025-09-11T23:44:48.006476103Z" level=info msg="Start recovering state" Sep 11 23:44:48.006774 containerd[1521]: time="2025-09-11T23:44:48.006563269Z" level=info msg="Start event monitor" Sep 11 23:44:48.006774 containerd[1521]: time="2025-09-11T23:44:48.006576791Z" level=info msg="Start cni network conf syncer for default" Sep 11 23:44:48.006774 containerd[1521]: time="2025-09-11T23:44:48.006583783Z" level=info msg="Start streaming server" Sep 11 23:44:48.006774 containerd[1521]: time="2025-09-11T23:44:48.006593118Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 23:44:48.006774 containerd[1521]: time="2025-09-11T23:44:48.006599764Z" level=info msg="runtime interface starting up..." Sep 11 23:44:48.006774 containerd[1521]: time="2025-09-11T23:44:48.006605526Z" level=info msg="starting plugins..." Sep 11 23:44:48.006774 containerd[1521]: time="2025-09-11T23:44:48.006618088Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 23:44:48.007105 containerd[1521]: time="2025-09-11T23:44:48.007059948Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 23:44:48.007153 containerd[1521]: time="2025-09-11T23:44:48.007137432Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 11 23:44:48.007231 containerd[1521]: time="2025-09-11T23:44:48.007215032Z" level=info msg="containerd successfully booted in 0.093841s" Sep 11 23:44:48.007271 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 23:44:48.021966 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 23:44:48.503242 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 23:44:48.522926 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 23:44:48.525607 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 23:44:48.545912 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 23:44:48.546126 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 23:44:48.548749 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 23:44:48.578341 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 23:44:48.581101 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 23:44:48.584632 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 11 23:44:48.586007 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 23:44:48.854082 systemd-networkd[1413]: eth0: Gained IPv6LL Sep 11 23:44:48.856144 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 23:44:48.857908 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 23:44:48.860229 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 11 23:44:48.862732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:44:48.881517 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 23:44:48.895642 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 23:44:48.895868 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Sep 11 23:44:48.897744 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 23:44:48.899509 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 23:44:49.396044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:44:49.397703 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 23:44:49.399578 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 23:44:49.399973 systemd[1]: Startup finished in 2.033s (kernel) + 5.331s (initrd) + 3.516s (userspace) = 10.881s. Sep 11 23:44:49.726616 kubelet[1609]: E0911 23:44:49.726519 1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 23:44:49.729058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 23:44:49.729184 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 23:44:49.729453 systemd[1]: kubelet.service: Consumed 741ms CPU time, 258.2M memory peak. Sep 11 23:44:53.015788 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 23:44:53.020550 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:55440.service - OpenSSH per-connection server daemon (10.0.0.1:55440). Sep 11 23:44:53.122899 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 55440 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:44:53.124019 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:44:53.129535 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 11 23:44:53.135252 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 23:44:53.141938 systemd-logind[1456]: New session 1 of user core. Sep 11 23:44:53.154129 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 23:44:53.164541 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 23:44:53.177757 (systemd)[1627]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 23:44:53.179725 systemd-logind[1456]: New session c1 of user core. Sep 11 23:44:53.269009 systemd[1627]: Queued start job for default target default.target. Sep 11 23:44:53.287793 systemd[1627]: Created slice app.slice - User Application Slice. Sep 11 23:44:53.288182 systemd[1627]: Reached target paths.target - Paths. Sep 11 23:44:53.288234 systemd[1627]: Reached target timers.target - Timers. Sep 11 23:44:53.289389 systemd[1627]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 23:44:53.298016 systemd[1627]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 23:44:53.298075 systemd[1627]: Reached target sockets.target - Sockets. Sep 11 23:44:53.298108 systemd[1627]: Reached target basic.target - Basic System. Sep 11 23:44:53.298139 systemd[1627]: Reached target default.target - Main User Target. Sep 11 23:44:53.298162 systemd[1627]: Startup finished in 113ms. Sep 11 23:44:53.298274 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 23:44:53.299453 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 23:44:53.362614 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:55446.service - OpenSSH per-connection server daemon (10.0.0.1:55446). 
Sep 11 23:44:53.415429 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 55446 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:44:53.416237 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:44:53.421553 systemd-logind[1456]: New session 2 of user core. Sep 11 23:44:53.440091 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 11 23:44:53.490436 sshd[1641]: Connection closed by 10.0.0.1 port 55446 Sep 11 23:44:53.490913 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Sep 11 23:44:53.501707 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:55446.service: Deactivated successfully. Sep 11 23:44:53.503109 systemd[1]: session-2.scope: Deactivated successfully. Sep 11 23:44:53.505077 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Sep 11 23:44:53.507234 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:55448.service - OpenSSH per-connection server daemon (10.0.0.1:55448). Sep 11 23:44:53.508074 systemd-logind[1456]: Removed session 2. Sep 11 23:44:53.569999 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 55448 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:44:53.571389 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:44:53.575077 systemd-logind[1456]: New session 3 of user core. Sep 11 23:44:53.590025 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 23:44:53.636953 sshd[1650]: Connection closed by 10.0.0.1 port 55448 Sep 11 23:44:53.637460 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Sep 11 23:44:53.646557 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:55448.service: Deactivated successfully. Sep 11 23:44:53.649062 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 23:44:53.650023 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. 
Sep 11 23:44:53.651495 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:55458.service - OpenSSH per-connection server daemon (10.0.0.1:55458).
Sep 11 23:44:53.655311 systemd-logind[1456]: Removed session 3.
Sep 11 23:44:53.705751 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 55458 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:44:53.709382 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:44:53.712657 systemd-logind[1456]: New session 4 of user core.
Sep 11 23:44:53.721039 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 11 23:44:53.772415 sshd[1659]: Connection closed by 10.0.0.1 port 55458
Sep 11 23:44:53.772298 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Sep 11 23:44:53.781612 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:55458.service: Deactivated successfully.
Sep 11 23:44:53.784438 systemd[1]: session-4.scope: Deactivated successfully.
Sep 11 23:44:53.785917 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit.
Sep 11 23:44:53.787036 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:55474.service - OpenSSH per-connection server daemon (10.0.0.1:55474).
Sep 11 23:44:53.791297 systemd-logind[1456]: Removed session 4.
Sep 11 23:44:53.840405 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 55474 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:44:53.841255 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:44:53.846947 systemd-logind[1456]: New session 5 of user core.
Sep 11 23:44:53.858101 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 11 23:44:53.912183 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 11 23:44:53.912423 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 23:44:53.933689 sudo[1669]: pam_unix(sudo:session): session closed for user root
Sep 11 23:44:53.936091 sshd[1668]: Connection closed by 10.0.0.1 port 55474
Sep 11 23:44:53.935917 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
Sep 11 23:44:53.947600 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:55474.service: Deactivated successfully.
Sep 11 23:44:53.948839 systemd[1]: session-5.scope: Deactivated successfully.
Sep 11 23:44:53.949629 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit.
Sep 11 23:44:53.951749 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:55486.service - OpenSSH per-connection server daemon (10.0.0.1:55486).
Sep 11 23:44:53.952398 systemd-logind[1456]: Removed session 5.
Sep 11 23:44:54.007500 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 55486 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:44:54.011701 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:44:54.016180 systemd-logind[1456]: New session 6 of user core.
Sep 11 23:44:54.023025 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 11 23:44:54.073184 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 11 23:44:54.073432 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 23:44:54.078216 sudo[1680]: pam_unix(sudo:session): session closed for user root
Sep 11 23:44:54.082657 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 11 23:44:54.082935 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 23:44:54.091008 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 23:44:54.127725 augenrules[1702]: No rules
Sep 11 23:44:54.128280 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 23:44:54.129918 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 23:44:54.130763 sudo[1679]: pam_unix(sudo:session): session closed for user root
Sep 11 23:44:54.132580 sshd[1678]: Connection closed by 10.0.0.1 port 55486
Sep 11 23:44:54.132947 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
Sep 11 23:44:54.140521 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:55486.service: Deactivated successfully.
Sep 11 23:44:54.142780 systemd[1]: session-6.scope: Deactivated successfully.
Sep 11 23:44:54.144886 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit.
Sep 11 23:44:54.146110 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:55492.service - OpenSSH per-connection server daemon (10.0.0.1:55492).
Sep 11 23:44:54.147429 systemd-logind[1456]: Removed session 6.
Sep 11 23:44:54.204216 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 55492 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:44:54.205642 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:44:54.215031 systemd-logind[1456]: New session 7 of user core.
Sep 11 23:44:54.221075 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 11 23:44:54.270128 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 11 23:44:54.270368 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 23:44:54.532476 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 11 23:44:54.554257 (dockerd)[1737]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 11 23:44:54.743351 dockerd[1737]: time="2025-09-11T23:44:54.743291242Z" level=info msg="Starting up"
Sep 11 23:44:54.744060 dockerd[1737]: time="2025-09-11T23:44:54.744040720Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 11 23:44:54.753187 dockerd[1737]: time="2025-09-11T23:44:54.753151338Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 11 23:44:54.915372 dockerd[1737]: time="2025-09-11T23:44:54.915276246Z" level=info msg="Loading containers: start."
Sep 11 23:44:54.934913 kernel: Initializing XFRM netlink socket
Sep 11 23:44:55.133767 systemd-networkd[1413]: docker0: Link UP
Sep 11 23:44:55.136772 dockerd[1737]: time="2025-09-11T23:44:55.136722506Z" level=info msg="Loading containers: done."
Sep 11 23:44:55.148980 dockerd[1737]: time="2025-09-11T23:44:55.148936118Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 11 23:44:55.149094 dockerd[1737]: time="2025-09-11T23:44:55.149012235Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 11 23:44:55.149094 dockerd[1737]: time="2025-09-11T23:44:55.149084454Z" level=info msg="Initializing buildkit"
Sep 11 23:44:55.172178 dockerd[1737]: time="2025-09-11T23:44:55.172040754Z" level=info msg="Completed buildkit initialization"
Sep 11 23:44:55.176655 dockerd[1737]: time="2025-09-11T23:44:55.176618122Z" level=info msg="Daemon has completed initialization"
Sep 11 23:44:55.176814 dockerd[1737]: time="2025-09-11T23:44:55.176691129Z" level=info msg="API listen on /run/docker.sock"
Sep 11 23:44:55.176865 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 11 23:44:55.743793 containerd[1521]: time="2025-09-11T23:44:55.743345536Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 11 23:44:57.383054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2697292535.mount: Deactivated successfully.
Sep 11 23:44:58.534906 containerd[1521]: time="2025-09-11T23:44:58.534844065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:44:58.535710 containerd[1521]: time="2025-09-11T23:44:58.535679984Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687"
Sep 11 23:44:58.536343 containerd[1521]: time="2025-09-11T23:44:58.536317116Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:44:58.539860 containerd[1521]: time="2025-09-11T23:44:58.539828784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:44:58.540685 containerd[1521]: time="2025-09-11T23:44:58.540503797Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.797085546s"
Sep 11 23:44:58.540685 containerd[1521]: time="2025-09-11T23:44:58.540548328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Sep 11 23:44:58.541426 containerd[1521]: time="2025-09-11T23:44:58.541233237Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 11 23:44:59.584318 containerd[1521]: time="2025-09-11T23:44:59.584269944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:44:59.584842 containerd[1521]: time="2025-09-11T23:44:59.584810372Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202"
Sep 11 23:44:59.585803 containerd[1521]: time="2025-09-11T23:44:59.585762572Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:44:59.588798 containerd[1521]: time="2025-09-11T23:44:59.588758528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:44:59.589912 containerd[1521]: time="2025-09-11T23:44:59.589693566Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.04842183s"
Sep 11 23:44:59.589912 containerd[1521]: time="2025-09-11T23:44:59.589727771Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Sep 11 23:44:59.590332 containerd[1521]: time="2025-09-11T23:44:59.590313423Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 11 23:44:59.979540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 11 23:44:59.981011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 23:45:00.103341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 23:45:00.107620 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 23:45:00.142740 kubelet[2023]: E0911 23:45:00.142690 2023 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 23:45:00.145537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 23:45:00.145668 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 23:45:00.147003 systemd[1]: kubelet.service: Consumed 137ms CPU time, 107.5M memory peak.
Sep 11 23:45:00.850641 containerd[1521]: time="2025-09-11T23:45:00.849981136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:00.851021 containerd[1521]: time="2025-09-11T23:45:00.850586389Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326"
Sep 11 23:45:00.851427 containerd[1521]: time="2025-09-11T23:45:00.851382468Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:00.853719 containerd[1521]: time="2025-09-11T23:45:00.853683604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:00.855527 containerd[1521]: time="2025-09-11T23:45:00.855494602Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.265086163s"
Sep 11 23:45:00.855527 containerd[1521]: time="2025-09-11T23:45:00.855526109Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Sep 11 23:45:00.856053 containerd[1521]: time="2025-09-11T23:45:00.856029541Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 11 23:45:01.838337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4189789373.mount: Deactivated successfully.
Sep 11 23:45:02.053027 containerd[1521]: time="2025-09-11T23:45:02.052974785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:02.053989 containerd[1521]: time="2025-09-11T23:45:02.053953452Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819"
Sep 11 23:45:02.055897 containerd[1521]: time="2025-09-11T23:45:02.055047688Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:02.057345 containerd[1521]: time="2025-09-11T23:45:02.057317619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:02.058160 containerd[1521]: time="2025-09-11T23:45:02.058135434Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.202071751s"
Sep 11 23:45:02.058267 containerd[1521]: time="2025-09-11T23:45:02.058240270Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Sep 11 23:45:02.058744 containerd[1521]: time="2025-09-11T23:45:02.058723898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 11 23:45:02.605388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701379173.mount: Deactivated successfully.
Sep 11 23:45:03.441424 containerd[1521]: time="2025-09-11T23:45:03.441369666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:03.442047 containerd[1521]: time="2025-09-11T23:45:03.442012174Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 11 23:45:03.442798 containerd[1521]: time="2025-09-11T23:45:03.442739307Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:03.445832 containerd[1521]: time="2025-09-11T23:45:03.445798671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:03.446906 containerd[1521]: time="2025-09-11T23:45:03.446859093Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.388108394s"
Sep 11 23:45:03.446977 containerd[1521]: time="2025-09-11T23:45:03.446909900Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 11 23:45:03.447833 containerd[1521]: time="2025-09-11T23:45:03.447797452Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 11 23:45:03.921406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149760358.mount: Deactivated successfully.
Sep 11 23:45:03.925076 containerd[1521]: time="2025-09-11T23:45:03.925029690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 23:45:03.925518 containerd[1521]: time="2025-09-11T23:45:03.925484248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 11 23:45:03.926603 containerd[1521]: time="2025-09-11T23:45:03.926564962Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 23:45:03.928601 containerd[1521]: time="2025-09-11T23:45:03.928563823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 23:45:03.929472 containerd[1521]: time="2025-09-11T23:45:03.929439399Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 481.612306ms"
Sep 11 23:45:03.929511 containerd[1521]: time="2025-09-11T23:45:03.929472342Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 11 23:45:03.930012 containerd[1521]: time="2025-09-11T23:45:03.929988887Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 11 23:45:04.370700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount457509211.mount: Deactivated successfully.
Sep 11 23:45:06.244908 containerd[1521]: time="2025-09-11T23:45:06.244215688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:06.244908 containerd[1521]: time="2025-09-11T23:45:06.244653555Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 11 23:45:06.245659 containerd[1521]: time="2025-09-11T23:45:06.245630685Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:06.248078 containerd[1521]: time="2025-09-11T23:45:06.248050152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:45:06.249953 containerd[1521]: time="2025-09-11T23:45:06.249923860Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.319903766s"
Sep 11 23:45:06.250021 containerd[1521]: time="2025-09-11T23:45:06.249956901Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 11 23:45:10.396102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 11 23:45:10.397528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 23:45:10.642010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 23:45:10.645639 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 23:45:10.682569 kubelet[2186]: E0911 23:45:10.682441 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 23:45:10.685318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 23:45:10.685543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 23:45:10.685915 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.6M memory peak.
Sep 11 23:45:10.778428 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 23:45:10.778734 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.6M memory peak.
Sep 11 23:45:10.780656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 23:45:10.804448 systemd[1]: Reload requested from client PID 2201 ('systemctl') (unit session-7.scope)...
Sep 11 23:45:10.804470 systemd[1]: Reloading...
Sep 11 23:45:10.871920 zram_generator::config[2244]: No configuration found.
Sep 11 23:45:11.120508 systemd[1]: Reloading finished in 315 ms.
Sep 11 23:45:11.184293 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 11 23:45:11.184366 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 11 23:45:11.184579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 23:45:11.184624 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.2M memory peak.
Sep 11 23:45:11.186005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 23:45:11.397207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 23:45:11.400716 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 11 23:45:11.433274 kubelet[2289]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 23:45:11.433274 kubelet[2289]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 11 23:45:11.433274 kubelet[2289]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 23:45:11.433568 kubelet[2289]: I0911 23:45:11.433317 2289 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 11 23:45:12.405154 kubelet[2289]: I0911 23:45:12.405117 2289 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 11 23:45:12.405297 kubelet[2289]: I0911 23:45:12.405287 2289 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 11 23:45:12.405639 kubelet[2289]: I0911 23:45:12.405621 2289 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 11 23:45:12.425813 kubelet[2289]: I0911 23:45:12.425772 2289 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 23:45:12.426026 kubelet[2289]: E0911 23:45:12.426004 2289 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Sep 11 23:45:12.433087 kubelet[2289]: I0911 23:45:12.433065 2289 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 11 23:45:12.436909 kubelet[2289]: I0911 23:45:12.436873 2289 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 11 23:45:12.437559 kubelet[2289]: I0911 23:45:12.437487 2289 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 11 23:45:12.437724 kubelet[2289]: I0911 23:45:12.437544 2289 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 11 23:45:12.437842 kubelet[2289]: I0911 23:45:12.437830 2289 topology_manager.go:138] "Creating topology manager with none policy"
Sep 11 23:45:12.437842 kubelet[2289]: I0911 23:45:12.437842 2289 container_manager_linux.go:304] "Creating device plugin manager"
Sep 11 23:45:12.438073 kubelet[2289]: I0911 23:45:12.438044 2289 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 23:45:12.440515 kubelet[2289]: I0911 23:45:12.440393 2289 kubelet.go:446] "Attempting to sync node with API server"
Sep 11 23:45:12.440515 kubelet[2289]: I0911 23:45:12.440423 2289 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 11 23:45:12.440515 kubelet[2289]: I0911 23:45:12.440446 2289 kubelet.go:352] "Adding apiserver pod source"
Sep 11 23:45:12.440515 kubelet[2289]: I0911 23:45:12.440464 2289 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 11 23:45:12.445008 kubelet[2289]: W0911 23:45:12.444774 2289 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Sep 11 23:45:12.445008 kubelet[2289]: W0911 23:45:12.444840 2289 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Sep 11 23:45:12.445008 kubelet[2289]: E0911 23:45:12.444908 2289 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Sep 11 23:45:12.445008 kubelet[2289]: E0911 23:45:12.444857 2289 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Sep 11 23:45:12.445263 kubelet[2289]: I0911 23:45:12.445235 2289 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 11 23:45:12.446984 kubelet[2289]: I0911 23:45:12.446034 2289 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 11 23:45:12.446984 kubelet[2289]: W0911 23:45:12.446183 2289 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 11 23:45:12.447730 kubelet[2289]: I0911 23:45:12.447704 2289 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 11 23:45:12.447825 kubelet[2289]: I0911 23:45:12.447746 2289 server.go:1287] "Started kubelet"
Sep 11 23:45:12.448054 kubelet[2289]: I0911 23:45:12.448005 2289 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 11 23:45:12.448379 kubelet[2289]: I0911 23:45:12.448333 2289 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 11 23:45:12.448678 kubelet[2289]: I0911 23:45:12.448657 2289 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 11 23:45:12.448838 kubelet[2289]: I0911 23:45:12.448809 2289 server.go:479] "Adding debug handlers to kubelet server"
Sep 11 23:45:12.450331 kubelet[2289]: E0911 23:45:12.450103 2289 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18645f18467c0c46 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 23:45:12.447724614 +0000 UTC m=+1.044305759,LastTimestamp:2025-09-11 23:45:12.447724614 +0000 UTC m=+1.044305759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 11 23:45:12.451656 kubelet[2289]: I0911 23:45:12.451627 2289 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 11 23:45:12.451990 kubelet[2289]: I0911 23:45:12.451974 2289 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 11 23:45:12.452079 kubelet[2289]: E0911 23:45:12.452013 2289 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 11 23:45:12.452250 kubelet[2289]: I0911 23:45:12.452226 2289 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 11 23:45:12.452321 kubelet[2289]: I0911 23:45:12.452305 2289 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 11 23:45:12.452321 kubelet[2289]: E0911 23:45:12.452308 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 11 23:45:12.452375 kubelet[2289]: I0911 23:45:12.452351 2289 reconciler.go:26] "Reconciler: start to sync state"
Sep 11 23:45:12.452672 kubelet[2289]: E0911 23:45:12.452634 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms"
Sep 11 23:45:12.452672 kubelet[2289]: W0911 23:45:12.452632 2289 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Sep 11 23:45:12.452751 kubelet[2289]: E0911 23:45:12.452682 2289 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Sep 11 23:45:12.453090 kubelet[2289]: I0911 23:45:12.453067 2289 factory.go:221] Registration of the systemd container factory successfully
Sep 11 23:45:12.453179 kubelet[2289]: I0911 23:45:12.453160 2289 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 11 23:45:12.454390 kubelet[2289]: I0911 23:45:12.454367 2289 factory.go:221] Registration of the containerd container factory successfully
Sep 11 23:45:12.464661 kubelet[2289]: I0911 23:45:12.464631 2289 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 11 23:45:12.464661 kubelet[2289]: I0911 23:45:12.464647 2289 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 11 23:45:12.464661 kubelet[2289]: I0911 23:45:12.464663 2289 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 23:45:12.467324 kubelet[2289]: I0911 23:45:12.467294 2289 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 11 23:45:12.468397 kubelet[2289]: I0911 23:45:12.468377 2289 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 11 23:45:12.468506 kubelet[2289]: I0911 23:45:12.468494 2289 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 11 23:45:12.468569 kubelet[2289]: I0911 23:45:12.468556 2289 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 11 23:45:12.468631 kubelet[2289]: I0911 23:45:12.468621 2289 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 23:45:12.468730 kubelet[2289]: E0911 23:45:12.468708 2289 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 23:45:12.500822 kubelet[2289]: I0911 23:45:12.500775 2289 policy_none.go:49] "None policy: Start" Sep 11 23:45:12.500822 kubelet[2289]: I0911 23:45:12.500815 2289 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 23:45:12.500822 kubelet[2289]: I0911 23:45:12.500828 2289 state_mem.go:35] "Initializing new in-memory state store" Sep 11 23:45:12.501124 kubelet[2289]: W0911 23:45:12.501066 2289 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Sep 11 23:45:12.501160 kubelet[2289]: E0911 23:45:12.501124 2289 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Sep 11 23:45:12.507528 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 23:45:12.519281 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 11 23:45:12.521997 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 23:45:12.540722 kubelet[2289]: I0911 23:45:12.540682 2289 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 23:45:12.540936 kubelet[2289]: I0911 23:45:12.540914 2289 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 23:45:12.540985 kubelet[2289]: I0911 23:45:12.540933 2289 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 23:45:12.541479 kubelet[2289]: I0911 23:45:12.541282 2289 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 23:45:12.541952 kubelet[2289]: E0911 23:45:12.541930 2289 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 23:45:12.542053 kubelet[2289]: E0911 23:45:12.541970 2289 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 23:45:12.576147 systemd[1]: Created slice kubepods-burstable-pod62bf92eb55477fec747c29fa808c9c9b.slice - libcontainer container kubepods-burstable-pod62bf92eb55477fec747c29fa808c9c9b.slice. Sep 11 23:45:12.600226 kubelet[2289]: E0911 23:45:12.600187 2289 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:45:12.604068 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
Sep 11 23:45:12.605640 kubelet[2289]: E0911 23:45:12.605606 2289 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:45:12.607903 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 11 23:45:12.609319 kubelet[2289]: E0911 23:45:12.609280 2289 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:45:12.642313 kubelet[2289]: I0911 23:45:12.642285 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 23:45:12.642731 kubelet[2289]: E0911 23:45:12.642704 2289 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Sep 11 23:45:12.653114 kubelet[2289]: E0911 23:45:12.653079 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms" Sep 11 23:45:12.653321 kubelet[2289]: I0911 23:45:12.653188 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:12.653321 kubelet[2289]: I0911 23:45:12.653234 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 11 23:45:12.653321 kubelet[2289]: I0911 23:45:12.653255 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:45:12.653321 kubelet[2289]: I0911 23:45:12.653271 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:45:12.653321 kubelet[2289]: I0911 23:45:12.653289 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:12.653467 kubelet[2289]: I0911 23:45:12.653305 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:12.653467 kubelet[2289]: I0911 23:45:12.653337 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:45:12.653467 kubelet[2289]: I0911 23:45:12.653372 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:12.653467 kubelet[2289]: I0911 23:45:12.653399 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:12.844932 kubelet[2289]: I0911 23:45:12.844753 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 23:45:12.845397 kubelet[2289]: E0911 23:45:12.845369 2289 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Sep 11 23:45:12.901489 containerd[1521]: time="2025-09-11T23:45:12.901421219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:62bf92eb55477fec747c29fa808c9c9b,Namespace:kube-system,Attempt:0,}" Sep 11 23:45:12.907127 containerd[1521]: time="2025-09-11T23:45:12.907096526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 11 23:45:12.910683 containerd[1521]: time="2025-09-11T23:45:12.910652084Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 11 23:45:12.921213 containerd[1521]: time="2025-09-11T23:45:12.921170198Z" level=info msg="connecting to shim ed0899202480028149dcc68c0f41cdd4c390774557541a19c941e95cb3adf8ee" address="unix:///run/containerd/s/c92eb1fad3ba60ca85427075573857c097df8bb32581da6396f69ab110d366bf" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:12.942517 containerd[1521]: time="2025-09-11T23:45:12.942478587Z" level=info msg="connecting to shim 798a8bd032d29db14dbbb8309a190ee40e39662cc56d7f2fbbdf5ca1d65cdb3f" address="unix:///run/containerd/s/645c572eb764fc2f8fd8dee042ae34aba7bf66dd87852ddc1b20abd65d85f861" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:12.944181 systemd[1]: Started cri-containerd-ed0899202480028149dcc68c0f41cdd4c390774557541a19c941e95cb3adf8ee.scope - libcontainer container ed0899202480028149dcc68c0f41cdd4c390774557541a19c941e95cb3adf8ee. Sep 11 23:45:12.946224 containerd[1521]: time="2025-09-11T23:45:12.946151836Z" level=info msg="connecting to shim 2172ae1d1e5039b302aeca2ffd373470911cf8a525ec80bc6e47e85ad73ad2f0" address="unix:///run/containerd/s/26a97d7d234cee7f237392b0c142c6f4323c14c32eef41f9bea2fc1ed29b7db1" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:12.972097 systemd[1]: Started cri-containerd-798a8bd032d29db14dbbb8309a190ee40e39662cc56d7f2fbbdf5ca1d65cdb3f.scope - libcontainer container 798a8bd032d29db14dbbb8309a190ee40e39662cc56d7f2fbbdf5ca1d65cdb3f. Sep 11 23:45:12.975605 systemd[1]: Started cri-containerd-2172ae1d1e5039b302aeca2ffd373470911cf8a525ec80bc6e47e85ad73ad2f0.scope - libcontainer container 2172ae1d1e5039b302aeca2ffd373470911cf8a525ec80bc6e47e85ad73ad2f0. 
Sep 11 23:45:12.987635 containerd[1521]: time="2025-09-11T23:45:12.987291272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:62bf92eb55477fec747c29fa808c9c9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed0899202480028149dcc68c0f41cdd4c390774557541a19c941e95cb3adf8ee\"" Sep 11 23:45:12.991389 containerd[1521]: time="2025-09-11T23:45:12.991356610Z" level=info msg="CreateContainer within sandbox \"ed0899202480028149dcc68c0f41cdd4c390774557541a19c941e95cb3adf8ee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 23:45:13.003663 containerd[1521]: time="2025-09-11T23:45:13.003610201Z" level=info msg="Container 8a7c7cc78dfe89a012627d0688ed4805aa49d41a57dfde09bb36529c65b69bfe: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:13.011859 containerd[1521]: time="2025-09-11T23:45:13.011784014Z" level=info msg="CreateContainer within sandbox \"ed0899202480028149dcc68c0f41cdd4c390774557541a19c941e95cb3adf8ee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8a7c7cc78dfe89a012627d0688ed4805aa49d41a57dfde09bb36529c65b69bfe\"" Sep 11 23:45:13.012874 containerd[1521]: time="2025-09-11T23:45:13.012490619Z" level=info msg="StartContainer for \"8a7c7cc78dfe89a012627d0688ed4805aa49d41a57dfde09bb36529c65b69bfe\"" Sep 11 23:45:13.012874 containerd[1521]: time="2025-09-11T23:45:13.012791436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2172ae1d1e5039b302aeca2ffd373470911cf8a525ec80bc6e47e85ad73ad2f0\"" Sep 11 23:45:13.013712 containerd[1521]: time="2025-09-11T23:45:13.013687414Z" level=info msg="connecting to shim 8a7c7cc78dfe89a012627d0688ed4805aa49d41a57dfde09bb36529c65b69bfe" address="unix:///run/containerd/s/c92eb1fad3ba60ca85427075573857c097df8bb32581da6396f69ab110d366bf" protocol=ttrpc version=3 Sep 11 23:45:13.015346 
containerd[1521]: time="2025-09-11T23:45:13.015318318Z" level=info msg="CreateContainer within sandbox \"2172ae1d1e5039b302aeca2ffd373470911cf8a525ec80bc6e47e85ad73ad2f0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 23:45:13.024800 containerd[1521]: time="2025-09-11T23:45:13.024134626Z" level=info msg="Container 579a99d72aaa00413fa5398c5dfd4d4c3e3b6f7d37721ea8069502e75b26a529: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:13.024800 containerd[1521]: time="2025-09-11T23:45:13.024321204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"798a8bd032d29db14dbbb8309a190ee40e39662cc56d7f2fbbdf5ca1d65cdb3f\"" Sep 11 23:45:13.029857 containerd[1521]: time="2025-09-11T23:45:13.029613753Z" level=info msg="CreateContainer within sandbox \"798a8bd032d29db14dbbb8309a190ee40e39662cc56d7f2fbbdf5ca1d65cdb3f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 23:45:13.034370 containerd[1521]: time="2025-09-11T23:45:13.034331990Z" level=info msg="CreateContainer within sandbox \"2172ae1d1e5039b302aeca2ffd373470911cf8a525ec80bc6e47e85ad73ad2f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"579a99d72aaa00413fa5398c5dfd4d4c3e3b6f7d37721ea8069502e75b26a529\"" Sep 11 23:45:13.034806 containerd[1521]: time="2025-09-11T23:45:13.034778801Z" level=info msg="StartContainer for \"579a99d72aaa00413fa5398c5dfd4d4c3e3b6f7d37721ea8069502e75b26a529\"" Sep 11 23:45:13.035764 containerd[1521]: time="2025-09-11T23:45:13.035727706Z" level=info msg="connecting to shim 579a99d72aaa00413fa5398c5dfd4d4c3e3b6f7d37721ea8069502e75b26a529" address="unix:///run/containerd/s/26a97d7d234cee7f237392b0c142c6f4323c14c32eef41f9bea2fc1ed29b7db1" protocol=ttrpc version=3 Sep 11 23:45:13.038064 systemd[1]: Started 
cri-containerd-8a7c7cc78dfe89a012627d0688ed4805aa49d41a57dfde09bb36529c65b69bfe.scope - libcontainer container 8a7c7cc78dfe89a012627d0688ed4805aa49d41a57dfde09bb36529c65b69bfe. Sep 11 23:45:13.039232 containerd[1521]: time="2025-09-11T23:45:13.039202214Z" level=info msg="Container 4a914d70699ecd6a307074c99ce145228166bad3063330636b0e29ee1bfc82b2: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:13.046710 containerd[1521]: time="2025-09-11T23:45:13.046661872Z" level=info msg="CreateContainer within sandbox \"798a8bd032d29db14dbbb8309a190ee40e39662cc56d7f2fbbdf5ca1d65cdb3f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a914d70699ecd6a307074c99ce145228166bad3063330636b0e29ee1bfc82b2\"" Sep 11 23:45:13.047358 containerd[1521]: time="2025-09-11T23:45:13.047328733Z" level=info msg="StartContainer for \"4a914d70699ecd6a307074c99ce145228166bad3063330636b0e29ee1bfc82b2\"" Sep 11 23:45:13.049500 containerd[1521]: time="2025-09-11T23:45:13.049470997Z" level=info msg="connecting to shim 4a914d70699ecd6a307074c99ce145228166bad3063330636b0e29ee1bfc82b2" address="unix:///run/containerd/s/645c572eb764fc2f8fd8dee042ae34aba7bf66dd87852ddc1b20abd65d85f861" protocol=ttrpc version=3 Sep 11 23:45:13.054148 kubelet[2289]: E0911 23:45:13.054115 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms" Sep 11 23:45:13.057028 systemd[1]: Started cri-containerd-579a99d72aaa00413fa5398c5dfd4d4c3e3b6f7d37721ea8069502e75b26a529.scope - libcontainer container 579a99d72aaa00413fa5398c5dfd4d4c3e3b6f7d37721ea8069502e75b26a529. Sep 11 23:45:13.072100 systemd[1]: Started cri-containerd-4a914d70699ecd6a307074c99ce145228166bad3063330636b0e29ee1bfc82b2.scope - libcontainer container 4a914d70699ecd6a307074c99ce145228166bad3063330636b0e29ee1bfc82b2. 
Sep 11 23:45:13.096844 containerd[1521]: time="2025-09-11T23:45:13.095408965Z" level=info msg="StartContainer for \"8a7c7cc78dfe89a012627d0688ed4805aa49d41a57dfde09bb36529c65b69bfe\" returns successfully" Sep 11 23:45:13.110821 containerd[1521]: time="2025-09-11T23:45:13.110786237Z" level=info msg="StartContainer for \"579a99d72aaa00413fa5398c5dfd4d4c3e3b6f7d37721ea8069502e75b26a529\" returns successfully" Sep 11 23:45:13.119074 containerd[1521]: time="2025-09-11T23:45:13.119032228Z" level=info msg="StartContainer for \"4a914d70699ecd6a307074c99ce145228166bad3063330636b0e29ee1bfc82b2\" returns successfully" Sep 11 23:45:13.247473 kubelet[2289]: I0911 23:45:13.247448 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 23:45:13.478110 kubelet[2289]: E0911 23:45:13.477389 2289 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:45:13.480001 kubelet[2289]: E0911 23:45:13.479957 2289 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:45:13.482678 kubelet[2289]: E0911 23:45:13.482561 2289 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:45:14.485430 kubelet[2289]: E0911 23:45:14.485271 2289 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:45:14.485733 kubelet[2289]: E0911 23:45:14.485468 2289 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:45:15.081127 kubelet[2289]: E0911 23:45:15.081079 2289 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"localhost\" not found" node="localhost" Sep 11 23:45:15.267496 kubelet[2289]: I0911 23:45:15.267409 2289 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 23:45:15.353398 kubelet[2289]: I0911 23:45:15.353149 2289 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 23:45:15.360210 kubelet[2289]: E0911 23:45:15.360169 2289 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 11 23:45:15.360210 kubelet[2289]: I0911 23:45:15.360202 2289 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 23:45:15.362508 kubelet[2289]: E0911 23:45:15.362294 2289 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 11 23:45:15.362508 kubelet[2289]: I0911 23:45:15.362321 2289 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:15.364090 kubelet[2289]: E0911 23:45:15.364063 2289 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:15.441426 kubelet[2289]: I0911 23:45:15.441386 2289 apiserver.go:52] "Watching apiserver" Sep 11 23:45:15.452560 kubelet[2289]: I0911 23:45:15.452508 2289 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 23:45:17.335623 systemd[1]: Reload requested from client PID 2564 ('systemctl') (unit session-7.scope)... Sep 11 23:45:17.335648 systemd[1]: Reloading... 
Sep 11 23:45:17.405955 zram_generator::config[2607]: No configuration found. Sep 11 23:45:17.568125 systemd[1]: Reloading finished in 232 ms. Sep 11 23:45:17.590130 kubelet[2289]: I0911 23:45:17.590029 2289 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 23:45:17.590225 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:45:17.603786 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 23:45:17.604077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:45:17.604129 systemd[1]: kubelet.service: Consumed 1.405s CPU time, 127.5M memory peak. Sep 11 23:45:17.605645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:45:17.738015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:45:17.741652 (kubelet)[2649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 23:45:17.788113 kubelet[2649]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 23:45:17.788113 kubelet[2649]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 23:45:17.788113 kubelet[2649]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 11 23:45:17.788452 kubelet[2649]: I0911 23:45:17.788172 2649 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 23:45:17.793622 kubelet[2649]: I0911 23:45:17.793582 2649 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 11 23:45:17.793622 kubelet[2649]: I0911 23:45:17.793614 2649 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 23:45:17.793865 kubelet[2649]: I0911 23:45:17.793838 2649 server.go:954] "Client rotation is on, will bootstrap in background" Sep 11 23:45:17.795130 kubelet[2649]: I0911 23:45:17.795103 2649 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 11 23:45:17.797485 kubelet[2649]: I0911 23:45:17.797385 2649 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 23:45:17.802644 kubelet[2649]: I0911 23:45:17.802623 2649 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 23:45:17.805116 kubelet[2649]: I0911 23:45:17.805094 2649 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 23:45:17.805312 kubelet[2649]: I0911 23:45:17.805287 2649 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 23:45:17.805462 kubelet[2649]: I0911 23:45:17.805309 2649 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 23:45:17.805541 kubelet[2649]: I0911 23:45:17.805471 2649 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 11 23:45:17.805541 kubelet[2649]: I0911 23:45:17.805480 2649 container_manager_linux.go:304] "Creating device plugin manager" Sep 11 23:45:17.805541 kubelet[2649]: I0911 23:45:17.805520 2649 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:45:17.805648 kubelet[2649]: I0911 23:45:17.805636 2649 kubelet.go:446] "Attempting to sync node with API server" Sep 11 23:45:17.805673 kubelet[2649]: I0911 23:45:17.805653 2649 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 23:45:17.805695 kubelet[2649]: I0911 23:45:17.805679 2649 kubelet.go:352] "Adding apiserver pod source" Sep 11 23:45:17.805695 kubelet[2649]: I0911 23:45:17.805690 2649 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 23:45:17.806794 kubelet[2649]: I0911 23:45:17.806388 2649 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 11 23:45:17.809003 kubelet[2649]: I0911 23:45:17.808982 2649 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 23:45:17.809420 kubelet[2649]: I0911 23:45:17.809404 2649 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 23:45:17.809460 kubelet[2649]: I0911 23:45:17.809440 2649 server.go:1287] "Started kubelet" Sep 11 23:45:17.809991 kubelet[2649]: I0911 23:45:17.809959 2649 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 23:45:17.810285 kubelet[2649]: I0911 23:45:17.810215 2649 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 23:45:17.811007 kubelet[2649]: I0911 23:45:17.810866 2649 server.go:479] "Adding debug handlers to kubelet server" Sep 11 23:45:17.811078 kubelet[2649]: I0911 23:45:17.811057 2649 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 23:45:17.817980 kubelet[2649]: I0911 23:45:17.817911 2649 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 23:45:17.820368 kubelet[2649]: I0911 23:45:17.819026 2649 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 23:45:17.820940 kubelet[2649]: E0911 23:45:17.820425 2649 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 23:45:17.820940 kubelet[2649]: I0911 23:45:17.820459 2649 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 23:45:17.822588 kubelet[2649]: I0911 23:45:17.822548 2649 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 23:45:17.822827 kubelet[2649]: I0911 23:45:17.822807 2649 reconciler.go:26] "Reconciler: start to sync state" Sep 11 23:45:17.824797 kubelet[2649]: I0911 23:45:17.824756 2649 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 23:45:17.827711 kubelet[2649]: I0911 23:45:17.827691 2649 factory.go:221] Registration of the containerd container factory successfully Sep 11 23:45:17.828219 kubelet[2649]: I0911 23:45:17.828208 2649 factory.go:221] Registration of the systemd container factory successfully Sep 11 23:45:17.830377 kubelet[2649]: E0911 23:45:17.830341 2649 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 23:45:17.835945 kubelet[2649]: I0911 23:45:17.835916 2649 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 23:45:17.837268 kubelet[2649]: I0911 23:45:17.837146 2649 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 23:45:17.837268 kubelet[2649]: I0911 23:45:17.837190 2649 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 11 23:45:17.837268 kubelet[2649]: I0911 23:45:17.837209 2649 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 11 23:45:17.837268 kubelet[2649]: I0911 23:45:17.837215 2649 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 23:45:17.837714 kubelet[2649]: E0911 23:45:17.837691 2649 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 23:45:17.861348 kubelet[2649]: I0911 23:45:17.861251 2649 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 23:45:17.861348 kubelet[2649]: I0911 23:45:17.861271 2649 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 23:45:17.861348 kubelet[2649]: I0911 23:45:17.861291 2649 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:45:17.861484 kubelet[2649]: I0911 23:45:17.861430 2649 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 23:45:17.861484 kubelet[2649]: I0911 23:45:17.861440 2649 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 23:45:17.861484 kubelet[2649]: I0911 23:45:17.861455 2649 policy_none.go:49] "None policy: Start" Sep 11 23:45:17.861484 kubelet[2649]: I0911 23:45:17.861463 2649 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 23:45:17.861484 kubelet[2649]: I0911 23:45:17.861471 2649 state_mem.go:35] "Initializing new in-memory state store" Sep 11 23:45:17.861587 kubelet[2649]: I0911 23:45:17.861560 2649 state_mem.go:75] "Updated machine memory state" Sep 11 23:45:17.865358 kubelet[2649]: I0911 23:45:17.865333 2649 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 23:45:17.865871 kubelet[2649]: I0911 
23:45:17.865483 2649 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 23:45:17.865871 kubelet[2649]: I0911 23:45:17.865495 2649 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 23:45:17.865871 kubelet[2649]: I0911 23:45:17.865665 2649 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 23:45:17.868211 kubelet[2649]: E0911 23:45:17.868031 2649 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 23:45:17.938655 kubelet[2649]: I0911 23:45:17.938617 2649 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 23:45:17.938655 kubelet[2649]: I0911 23:45:17.938649 2649 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 23:45:17.938969 kubelet[2649]: I0911 23:45:17.938789 2649 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:17.969174 kubelet[2649]: I0911 23:45:17.969146 2649 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 23:45:17.976056 kubelet[2649]: I0911 23:45:17.976031 2649 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 11 23:45:17.976152 kubelet[2649]: I0911 23:45:17.976100 2649 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 23:45:18.023855 kubelet[2649]: I0911 23:45:18.023783 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:45:18.024003 kubelet[2649]: I0911 23:45:18.023866 
2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:18.024003 kubelet[2649]: I0911 23:45:18.023901 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:18.024003 kubelet[2649]: I0911 23:45:18.023923 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:18.024003 kubelet[2649]: I0911 23:45:18.023938 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 11 23:45:18.024003 kubelet[2649]: I0911 23:45:18.023952 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:45:18.024121 kubelet[2649]: I0911 23:45:18.023966 2649 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:18.024121 kubelet[2649]: I0911 23:45:18.023982 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:45:18.024121 kubelet[2649]: I0911 23:45:18.023996 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:45:18.806942 kubelet[2649]: I0911 23:45:18.806445 2649 apiserver.go:52] "Watching apiserver" Sep 11 23:45:18.823258 kubelet[2649]: I0911 23:45:18.823206 2649 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 23:45:18.851721 kubelet[2649]: I0911 23:45:18.851322 2649 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 23:45:18.851892 kubelet[2649]: I0911 23:45:18.851754 2649 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 23:45:18.856452 kubelet[2649]: E0911 23:45:18.856428 2649 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 11 23:45:18.859922 kubelet[2649]: E0911 23:45:18.859892 2649 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 23:45:18.874316 kubelet[2649]: I0911 23:45:18.874068 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8740542580000001 podStartE2EDuration="1.874054258s" podCreationTimestamp="2025-09-11 23:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:45:18.873334657 +0000 UTC m=+1.128479712" watchObservedRunningTime="2025-09-11 23:45:18.874054258 +0000 UTC m=+1.129199313" Sep 11 23:45:18.880718 kubelet[2649]: I0911 23:45:18.880549 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8805362209999998 podStartE2EDuration="1.880536221s" podCreationTimestamp="2025-09-11 23:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:45:18.880375137 +0000 UTC m=+1.135520192" watchObservedRunningTime="2025-09-11 23:45:18.880536221 +0000 UTC m=+1.135681236" Sep 11 23:45:18.898902 kubelet[2649]: I0911 23:45:18.898841 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.898827062 podStartE2EDuration="1.898827062s" podCreationTimestamp="2025-09-11 23:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:45:18.890630017 +0000 UTC m=+1.145775072" watchObservedRunningTime="2025-09-11 23:45:18.898827062 +0000 UTC m=+1.153972117" Sep 11 23:45:24.012420 kubelet[2649]: I0911 23:45:24.011412 2649 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Sep 11 23:45:24.012420 kubelet[2649]: I0911 23:45:24.011802 2649 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 23:45:24.012807 containerd[1521]: time="2025-09-11T23:45:24.011658264Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 23:45:24.649682 systemd[1]: Created slice kubepods-besteffort-pod644e89ab_bd16_4d66_a7b8_c7a822cb5a60.slice - libcontainer container kubepods-besteffort-pod644e89ab_bd16_4d66_a7b8_c7a822cb5a60.slice. Sep 11 23:45:24.669929 kubelet[2649]: I0911 23:45:24.669870 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/644e89ab-bd16-4d66-a7b8-c7a822cb5a60-kube-proxy\") pod \"kube-proxy-l687n\" (UID: \"644e89ab-bd16-4d66-a7b8-c7a822cb5a60\") " pod="kube-system/kube-proxy-l687n" Sep 11 23:45:24.669929 kubelet[2649]: I0911 23:45:24.669935 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/644e89ab-bd16-4d66-a7b8-c7a822cb5a60-xtables-lock\") pod \"kube-proxy-l687n\" (UID: \"644e89ab-bd16-4d66-a7b8-c7a822cb5a60\") " pod="kube-system/kube-proxy-l687n" Sep 11 23:45:24.670071 kubelet[2649]: I0911 23:45:24.669957 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/644e89ab-bd16-4d66-a7b8-c7a822cb5a60-lib-modules\") pod \"kube-proxy-l687n\" (UID: \"644e89ab-bd16-4d66-a7b8-c7a822cb5a60\") " pod="kube-system/kube-proxy-l687n" Sep 11 23:45:24.670071 kubelet[2649]: I0911 23:45:24.669972 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq8vd\" (UniqueName: \"kubernetes.io/projected/644e89ab-bd16-4d66-a7b8-c7a822cb5a60-kube-api-access-gq8vd\") pod 
\"kube-proxy-l687n\" (UID: \"644e89ab-bd16-4d66-a7b8-c7a822cb5a60\") " pod="kube-system/kube-proxy-l687n" Sep 11 23:45:24.961657 containerd[1521]: time="2025-09-11T23:45:24.961613809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l687n,Uid:644e89ab-bd16-4d66-a7b8-c7a822cb5a60,Namespace:kube-system,Attempt:0,}" Sep 11 23:45:24.982008 containerd[1521]: time="2025-09-11T23:45:24.981966977Z" level=info msg="connecting to shim 8bdaf655742c1714476df3901ec2c0a485e680f66e2ac03d56249072e62c5709" address="unix:///run/containerd/s/b5f9bc3a7fdf75d0457e500174ffb2af5d09539f2fd8ba4810fcd363c71e24a1" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:25.005033 systemd[1]: Started cri-containerd-8bdaf655742c1714476df3901ec2c0a485e680f66e2ac03d56249072e62c5709.scope - libcontainer container 8bdaf655742c1714476df3901ec2c0a485e680f66e2ac03d56249072e62c5709. Sep 11 23:45:25.026225 containerd[1521]: time="2025-09-11T23:45:25.026186418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l687n,Uid:644e89ab-bd16-4d66-a7b8-c7a822cb5a60,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bdaf655742c1714476df3901ec2c0a485e680f66e2ac03d56249072e62c5709\"" Sep 11 23:45:25.030306 containerd[1521]: time="2025-09-11T23:45:25.030263805Z" level=info msg="CreateContainer within sandbox \"8bdaf655742c1714476df3901ec2c0a485e680f66e2ac03d56249072e62c5709\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 23:45:25.043109 containerd[1521]: time="2025-09-11T23:45:25.043077966Z" level=info msg="Container 62a2dd26024a48f70d4338855317294c794fa34bac8260cb0888732b60ac0699: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:25.051635 containerd[1521]: time="2025-09-11T23:45:25.051577707Z" level=info msg="CreateContainer within sandbox \"8bdaf655742c1714476df3901ec2c0a485e680f66e2ac03d56249072e62c5709\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"62a2dd26024a48f70d4338855317294c794fa34bac8260cb0888732b60ac0699\"" Sep 11 23:45:25.052433 containerd[1521]: time="2025-09-11T23:45:25.052399977Z" level=info msg="StartContainer for \"62a2dd26024a48f70d4338855317294c794fa34bac8260cb0888732b60ac0699\"" Sep 11 23:45:25.053992 containerd[1521]: time="2025-09-11T23:45:25.053959787Z" level=info msg="connecting to shim 62a2dd26024a48f70d4338855317294c794fa34bac8260cb0888732b60ac0699" address="unix:///run/containerd/s/b5f9bc3a7fdf75d0457e500174ffb2af5d09539f2fd8ba4810fcd363c71e24a1" protocol=ttrpc version=3 Sep 11 23:45:25.079055 systemd[1]: Started cri-containerd-62a2dd26024a48f70d4338855317294c794fa34bac8260cb0888732b60ac0699.scope - libcontainer container 62a2dd26024a48f70d4338855317294c794fa34bac8260cb0888732b60ac0699. Sep 11 23:45:25.102906 systemd[1]: Created slice kubepods-besteffort-pod3737a07c_b669_4d35_9a7d_4e273e4aea35.slice - libcontainer container kubepods-besteffort-pod3737a07c_b669_4d35_9a7d_4e273e4aea35.slice. Sep 11 23:45:25.134162 containerd[1521]: time="2025-09-11T23:45:25.134119388Z" level=info msg="StartContainer for \"62a2dd26024a48f70d4338855317294c794fa34bac8260cb0888732b60ac0699\" returns successfully" Sep 11 23:45:25.173278 kubelet[2649]: I0911 23:45:25.173211 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3737a07c-b669-4d35-9a7d-4e273e4aea35-var-lib-calico\") pod \"tigera-operator-755d956888-lnrcf\" (UID: \"3737a07c-b669-4d35-9a7d-4e273e4aea35\") " pod="tigera-operator/tigera-operator-755d956888-lnrcf" Sep 11 23:45:25.173729 kubelet[2649]: I0911 23:45:25.173670 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f45jl\" (UniqueName: \"kubernetes.io/projected/3737a07c-b669-4d35-9a7d-4e273e4aea35-kube-api-access-f45jl\") pod \"tigera-operator-755d956888-lnrcf\" (UID: \"3737a07c-b669-4d35-9a7d-4e273e4aea35\") " 
pod="tigera-operator/tigera-operator-755d956888-lnrcf" Sep 11 23:45:25.408650 containerd[1521]: time="2025-09-11T23:45:25.408231309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-lnrcf,Uid:3737a07c-b669-4d35-9a7d-4e273e4aea35,Namespace:tigera-operator,Attempt:0,}" Sep 11 23:45:25.423523 containerd[1521]: time="2025-09-11T23:45:25.423485277Z" level=info msg="connecting to shim 13890f88ef86637ab320a8d66e3bfd320093027600bdfa994ecdd56deb694212" address="unix:///run/containerd/s/f38a0e95b36a4459196d929d97b903e231374de08666f3dabceb776ed5517a12" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:25.441024 systemd[1]: Started cri-containerd-13890f88ef86637ab320a8d66e3bfd320093027600bdfa994ecdd56deb694212.scope - libcontainer container 13890f88ef86637ab320a8d66e3bfd320093027600bdfa994ecdd56deb694212. Sep 11 23:45:25.473398 containerd[1521]: time="2025-09-11T23:45:25.473362053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-lnrcf,Uid:3737a07c-b669-4d35-9a7d-4e273e4aea35,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"13890f88ef86637ab320a8d66e3bfd320093027600bdfa994ecdd56deb694212\"" Sep 11 23:45:25.477039 containerd[1521]: time="2025-09-11T23:45:25.477011263Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 11 23:45:27.001849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317890916.mount: Deactivated successfully. 
Sep 11 23:45:27.313953 containerd[1521]: time="2025-09-11T23:45:27.313821697Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:27.314898 containerd[1521]: time="2025-09-11T23:45:27.314821177Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 11 23:45:27.315641 containerd[1521]: time="2025-09-11T23:45:27.315582549Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:27.317678 containerd[1521]: time="2025-09-11T23:45:27.317640116Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:27.319144 containerd[1521]: time="2025-09-11T23:45:27.319030404Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.841988457s" Sep 11 23:45:27.319144 containerd[1521]: time="2025-09-11T23:45:27.319061568Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 11 23:45:27.322754 containerd[1521]: time="2025-09-11T23:45:27.322721848Z" level=info msg="CreateContainer within sandbox \"13890f88ef86637ab320a8d66e3bfd320093027600bdfa994ecdd56deb694212\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 11 23:45:27.328658 containerd[1521]: time="2025-09-11T23:45:27.328618318Z" level=info msg="Container 
746ac2c59f9a6e2453ca1b7e8e0a11bb50d990e314ac5ce985de3133950d873a: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:27.334591 containerd[1521]: time="2025-09-11T23:45:27.334536471Z" level=info msg="CreateContainer within sandbox \"13890f88ef86637ab320a8d66e3bfd320093027600bdfa994ecdd56deb694212\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"746ac2c59f9a6e2453ca1b7e8e0a11bb50d990e314ac5ce985de3133950d873a\"" Sep 11 23:45:27.334941 containerd[1521]: time="2025-09-11T23:45:27.334917677Z" level=info msg="StartContainer for \"746ac2c59f9a6e2453ca1b7e8e0a11bb50d990e314ac5ce985de3133950d873a\"" Sep 11 23:45:27.335981 containerd[1521]: time="2025-09-11T23:45:27.335686969Z" level=info msg="connecting to shim 746ac2c59f9a6e2453ca1b7e8e0a11bb50d990e314ac5ce985de3133950d873a" address="unix:///run/containerd/s/f38a0e95b36a4459196d929d97b903e231374de08666f3dabceb776ed5517a12" protocol=ttrpc version=3 Sep 11 23:45:27.356028 systemd[1]: Started cri-containerd-746ac2c59f9a6e2453ca1b7e8e0a11bb50d990e314ac5ce985de3133950d873a.scope - libcontainer container 746ac2c59f9a6e2453ca1b7e8e0a11bb50d990e314ac5ce985de3133950d873a. 
Sep 11 23:45:27.379473 containerd[1521]: time="2025-09-11T23:45:27.379417714Z" level=info msg="StartContainer for \"746ac2c59f9a6e2453ca1b7e8e0a11bb50d990e314ac5ce985de3133950d873a\" returns successfully" Sep 11 23:45:27.876899 kubelet[2649]: I0911 23:45:27.876827 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l687n" podStartSLOduration=3.8768081949999997 podStartE2EDuration="3.876808195s" podCreationTimestamp="2025-09-11 23:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:45:25.874501227 +0000 UTC m=+8.129646282" watchObservedRunningTime="2025-09-11 23:45:27.876808195 +0000 UTC m=+10.131953250" Sep 11 23:45:27.877649 kubelet[2649]: I0911 23:45:27.877609 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-lnrcf" podStartSLOduration=1.032552009 podStartE2EDuration="2.877599451s" podCreationTimestamp="2025-09-11 23:45:25 +0000 UTC" firstStartedPulling="2025-09-11 23:45:25.475780858 +0000 UTC m=+7.730925913" lastFinishedPulling="2025-09-11 23:45:27.3208283 +0000 UTC m=+9.575973355" observedRunningTime="2025-09-11 23:45:27.877503319 +0000 UTC m=+10.132648374" watchObservedRunningTime="2025-09-11 23:45:27.877599451 +0000 UTC m=+10.132744506" Sep 11 23:45:32.523982 update_engine[1458]: I20250911 23:45:32.523909 1458 update_attempter.cc:509] Updating boot flags... Sep 11 23:45:32.704079 sudo[1715]: pam_unix(sudo:session): session closed for user root Sep 11 23:45:32.707889 sshd[1714]: Connection closed by 10.0.0.1 port 55492 Sep 11 23:45:32.708059 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Sep 11 23:45:32.719007 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:55492.service: Deactivated successfully. Sep 11 23:45:32.719182 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. 
Sep 11 23:45:32.723563 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 23:45:32.724584 systemd[1]: session-7.scope: Consumed 6.224s CPU time, 218.2M memory peak. Sep 11 23:45:32.726407 systemd-logind[1456]: Removed session 7. Sep 11 23:45:36.175916 systemd[1]: Created slice kubepods-besteffort-pod515e95dc_ac0d_4d8b_a620_24f9b5376c4d.slice - libcontainer container kubepods-besteffort-pod515e95dc_ac0d_4d8b_a620_24f9b5376c4d.slice. Sep 11 23:45:36.250775 kubelet[2649]: I0911 23:45:36.250564 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tshq\" (UniqueName: \"kubernetes.io/projected/515e95dc-ac0d-4d8b-a620-24f9b5376c4d-kube-api-access-8tshq\") pod \"calico-typha-7697ff878f-kw8lt\" (UID: \"515e95dc-ac0d-4d8b-a620-24f9b5376c4d\") " pod="calico-system/calico-typha-7697ff878f-kw8lt" Sep 11 23:45:36.250775 kubelet[2649]: I0911 23:45:36.250678 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/515e95dc-ac0d-4d8b-a620-24f9b5376c4d-tigera-ca-bundle\") pod \"calico-typha-7697ff878f-kw8lt\" (UID: \"515e95dc-ac0d-4d8b-a620-24f9b5376c4d\") " pod="calico-system/calico-typha-7697ff878f-kw8lt" Sep 11 23:45:36.250775 kubelet[2649]: I0911 23:45:36.250723 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/515e95dc-ac0d-4d8b-a620-24f9b5376c4d-typha-certs\") pod \"calico-typha-7697ff878f-kw8lt\" (UID: \"515e95dc-ac0d-4d8b-a620-24f9b5376c4d\") " pod="calico-system/calico-typha-7697ff878f-kw8lt" Sep 11 23:45:36.435006 systemd[1]: Created slice kubepods-besteffort-pod2689a985_04a7_4fd9_898f_64bc5989ba33.slice - libcontainer container kubepods-besteffort-pod2689a985_04a7_4fd9_898f_64bc5989ba33.slice. 
Sep 11 23:45:36.451542 kubelet[2649]: I0911 23:45:36.451492 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2689a985-04a7-4fd9-898f-64bc5989ba33-lib-modules\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.451542 kubelet[2649]: I0911 23:45:36.451538 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2689a985-04a7-4fd9-898f-64bc5989ba33-node-certs\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.451542 kubelet[2649]: I0911 23:45:36.451556 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2689a985-04a7-4fd9-898f-64bc5989ba33-cni-log-dir\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.451729 kubelet[2649]: I0911 23:45:36.451573 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2689a985-04a7-4fd9-898f-64bc5989ba33-xtables-lock\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.451729 kubelet[2649]: I0911 23:45:36.451589 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2689a985-04a7-4fd9-898f-64bc5989ba33-cni-bin-dir\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.451729 kubelet[2649]: I0911 23:45:36.451609 2649 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2689a985-04a7-4fd9-898f-64bc5989ba33-flexvol-driver-host\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.451729 kubelet[2649]: I0911 23:45:36.451628 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2689a985-04a7-4fd9-898f-64bc5989ba33-tigera-ca-bundle\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.451729 kubelet[2649]: I0911 23:45:36.451643 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2689a985-04a7-4fd9-898f-64bc5989ba33-var-run-calico\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.451837 kubelet[2649]: I0911 23:45:36.451671 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2689a985-04a7-4fd9-898f-64bc5989ba33-cni-net-dir\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.451837 kubelet[2649]: I0911 23:45:36.451687 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2689a985-04a7-4fd9-898f-64bc5989ba33-policysync\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.451837 kubelet[2649]: I0911 23:45:36.451701 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-wk5st\" (UniqueName: \"kubernetes.io/projected/2689a985-04a7-4fd9-898f-64bc5989ba33-kube-api-access-wk5st\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.451837 kubelet[2649]: I0911 23:45:36.451718 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2689a985-04a7-4fd9-898f-64bc5989ba33-var-lib-calico\") pod \"calico-node-5rf8f\" (UID: \"2689a985-04a7-4fd9-898f-64bc5989ba33\") " pod="calico-system/calico-node-5rf8f" Sep 11 23:45:36.482516 containerd[1521]: time="2025-09-11T23:45:36.482467774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7697ff878f-kw8lt,Uid:515e95dc-ac0d-4d8b-a620-24f9b5376c4d,Namespace:calico-system,Attempt:0,}" Sep 11 23:45:36.559059 kubelet[2649]: E0911 23:45:36.559029 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.559241 kubelet[2649]: W0911 23:45:36.559184 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.559241 kubelet[2649]: E0911 23:45:36.559210 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.569970 kubelet[2649]: E0911 23:45:36.569791 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.569970 kubelet[2649]: W0911 23:45:36.569817 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.569970 kubelet[2649]: E0911 23:45:36.569835 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.570313 containerd[1521]: time="2025-09-11T23:45:36.570277613Z" level=info msg="connecting to shim 98b8ab7587d3ab83d56d5bc29c304c9e148b0cece838478370a98b3fdb1e966e" address="unix:///run/containerd/s/562b4d265281efa6b5c0f65ec5c38cd45d9ae6530c338a6c23ecdadf5617e404" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:36.617099 systemd[1]: Started cri-containerd-98b8ab7587d3ab83d56d5bc29c304c9e148b0cece838478370a98b3fdb1e966e.scope - libcontainer container 98b8ab7587d3ab83d56d5bc29c304c9e148b0cece838478370a98b3fdb1e966e. 
Sep 11 23:45:36.677825 containerd[1521]: time="2025-09-11T23:45:36.677655900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7697ff878f-kw8lt,Uid:515e95dc-ac0d-4d8b-a620-24f9b5376c4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"98b8ab7587d3ab83d56d5bc29c304c9e148b0cece838478370a98b3fdb1e966e\"" Sep 11 23:45:36.687636 containerd[1521]: time="2025-09-11T23:45:36.687355478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 23:45:36.690191 kubelet[2649]: E0911 23:45:36.690148 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2v2z" podUID="2c50f3e7-d108-4dc8-8180-0e0ae420aa58" Sep 11 23:45:36.736469 kubelet[2649]: E0911 23:45:36.736441 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.736469 kubelet[2649]: W0911 23:45:36.736462 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.736622 kubelet[2649]: E0911 23:45:36.736480 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.736672 kubelet[2649]: E0911 23:45:36.736656 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.740274 kubelet[2649]: W0911 23:45:36.736668 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.740274 kubelet[2649]: E0911 23:45:36.740273 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.740497 kubelet[2649]: E0911 23:45:36.740480 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.740497 kubelet[2649]: W0911 23:45:36.740495 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.740549 kubelet[2649]: E0911 23:45:36.740506 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.740693 kubelet[2649]: E0911 23:45:36.740670 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.740693 kubelet[2649]: W0911 23:45:36.740691 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.740752 kubelet[2649]: E0911 23:45:36.740701 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.740886 kubelet[2649]: E0911 23:45:36.740870 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.740920 kubelet[2649]: W0911 23:45:36.740891 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.740920 kubelet[2649]: E0911 23:45:36.740899 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.741054 kubelet[2649]: E0911 23:45:36.741041 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.741054 kubelet[2649]: W0911 23:45:36.741053 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.741106 kubelet[2649]: E0911 23:45:36.741061 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.741204 kubelet[2649]: E0911 23:45:36.741192 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.741204 kubelet[2649]: W0911 23:45:36.741203 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.741245 kubelet[2649]: E0911 23:45:36.741210 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.741369 kubelet[2649]: E0911 23:45:36.741357 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.741369 kubelet[2649]: W0911 23:45:36.741368 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.741429 kubelet[2649]: E0911 23:45:36.741375 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.742475 kubelet[2649]: E0911 23:45:36.742263 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.742475 kubelet[2649]: W0911 23:45:36.742284 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.742475 kubelet[2649]: E0911 23:45:36.742302 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.742954 containerd[1521]: time="2025-09-11T23:45:36.742570358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5rf8f,Uid:2689a985-04a7-4fd9-898f-64bc5989ba33,Namespace:calico-system,Attempt:0,}" Sep 11 23:45:36.743251 kubelet[2649]: E0911 23:45:36.743228 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.743251 kubelet[2649]: W0911 23:45:36.743244 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.743328 kubelet[2649]: E0911 23:45:36.743258 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.743555 kubelet[2649]: E0911 23:45:36.743528 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.743555 kubelet[2649]: W0911 23:45:36.743542 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.743555 kubelet[2649]: E0911 23:45:36.743551 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.743724 kubelet[2649]: E0911 23:45:36.743703 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.743724 kubelet[2649]: W0911 23:45:36.743717 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.743724 kubelet[2649]: E0911 23:45:36.743725 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.743978 kubelet[2649]: E0911 23:45:36.743875 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.743978 kubelet[2649]: W0911 23:45:36.743896 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.743978 kubelet[2649]: E0911 23:45:36.743905 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.744495 kubelet[2649]: E0911 23:45:36.744035 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.744495 kubelet[2649]: W0911 23:45:36.744045 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.744495 kubelet[2649]: E0911 23:45:36.744053 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.744495 kubelet[2649]: E0911 23:45:36.744217 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.744495 kubelet[2649]: W0911 23:45:36.744223 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.744495 kubelet[2649]: E0911 23:45:36.744231 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.744495 kubelet[2649]: E0911 23:45:36.744351 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.744495 kubelet[2649]: W0911 23:45:36.744358 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.744495 kubelet[2649]: E0911 23:45:36.744365 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.744495 kubelet[2649]: E0911 23:45:36.744501 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.744709 kubelet[2649]: W0911 23:45:36.744509 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.744709 kubelet[2649]: E0911 23:45:36.744516 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.744709 kubelet[2649]: E0911 23:45:36.744659 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.744709 kubelet[2649]: W0911 23:45:36.744666 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.744709 kubelet[2649]: E0911 23:45:36.744673 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.744941 kubelet[2649]: E0911 23:45:36.744817 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.744941 kubelet[2649]: W0911 23:45:36.744829 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.744941 kubelet[2649]: E0911 23:45:36.744838 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.745030 kubelet[2649]: E0911 23:45:36.744985 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.745030 kubelet[2649]: W0911 23:45:36.744993 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.745030 kubelet[2649]: E0911 23:45:36.745000 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.754053 kubelet[2649]: E0911 23:45:36.753983 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.754053 kubelet[2649]: W0911 23:45:36.754002 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.754053 kubelet[2649]: E0911 23:45:36.754026 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.754053 kubelet[2649]: I0911 23:45:36.754053 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqjx6\" (UniqueName: \"kubernetes.io/projected/2c50f3e7-d108-4dc8-8180-0e0ae420aa58-kube-api-access-qqjx6\") pod \"csi-node-driver-n2v2z\" (UID: \"2c50f3e7-d108-4dc8-8180-0e0ae420aa58\") " pod="calico-system/csi-node-driver-n2v2z" Sep 11 23:45:36.754912 kubelet[2649]: E0911 23:45:36.754638 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.754912 kubelet[2649]: W0911 23:45:36.754659 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.755092 kubelet[2649]: E0911 23:45:36.755067 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.755140 kubelet[2649]: I0911 23:45:36.755099 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2c50f3e7-d108-4dc8-8180-0e0ae420aa58-varrun\") pod \"csi-node-driver-n2v2z\" (UID: \"2c50f3e7-d108-4dc8-8180-0e0ae420aa58\") " pod="calico-system/csi-node-driver-n2v2z" Sep 11 23:45:36.755263 kubelet[2649]: E0911 23:45:36.755245 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.755263 kubelet[2649]: W0911 23:45:36.755260 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.755346 kubelet[2649]: E0911 23:45:36.755277 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.756008 kubelet[2649]: E0911 23:45:36.755986 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.756008 kubelet[2649]: W0911 23:45:36.756006 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.756061 kubelet[2649]: E0911 23:45:36.756028 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.756349 kubelet[2649]: E0911 23:45:36.756320 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.756349 kubelet[2649]: W0911 23:45:36.756335 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.756408 kubelet[2649]: E0911 23:45:36.756366 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.756408 kubelet[2649]: I0911 23:45:36.756384 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c50f3e7-d108-4dc8-8180-0e0ae420aa58-registration-dir\") pod \"csi-node-driver-n2v2z\" (UID: \"2c50f3e7-d108-4dc8-8180-0e0ae420aa58\") " pod="calico-system/csi-node-driver-n2v2z" Sep 11 23:45:36.756589 kubelet[2649]: E0911 23:45:36.756574 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.756589 kubelet[2649]: W0911 23:45:36.756588 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.756646 kubelet[2649]: E0911 23:45:36.756632 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.756673 kubelet[2649]: I0911 23:45:36.756649 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c50f3e7-d108-4dc8-8180-0e0ae420aa58-socket-dir\") pod \"csi-node-driver-n2v2z\" (UID: \"2c50f3e7-d108-4dc8-8180-0e0ae420aa58\") " pod="calico-system/csi-node-driver-n2v2z" Sep 11 23:45:36.756820 kubelet[2649]: E0911 23:45:36.756805 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.756820 kubelet[2649]: W0911 23:45:36.756818 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.756951 kubelet[2649]: E0911 23:45:36.756852 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.756991 kubelet[2649]: E0911 23:45:36.756977 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.756991 kubelet[2649]: W0911 23:45:36.756988 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.757204 kubelet[2649]: E0911 23:45:36.757135 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.757725 kubelet[2649]: E0911 23:45:36.757709 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.757800 kubelet[2649]: W0911 23:45:36.757777 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.758016 kubelet[2649]: E0911 23:45:36.757914 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.758124 kubelet[2649]: I0911 23:45:36.758105 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c50f3e7-d108-4dc8-8180-0e0ae420aa58-kubelet-dir\") pod \"csi-node-driver-n2v2z\" (UID: \"2c50f3e7-d108-4dc8-8180-0e0ae420aa58\") " pod="calico-system/csi-node-driver-n2v2z" Sep 11 23:45:36.758932 kubelet[2649]: E0911 23:45:36.758907 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.758932 kubelet[2649]: W0911 23:45:36.758929 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.759244 kubelet[2649]: E0911 23:45:36.758964 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.761088 kubelet[2649]: E0911 23:45:36.761062 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.761088 kubelet[2649]: W0911 23:45:36.761085 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.761206 kubelet[2649]: E0911 23:45:36.761100 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.761481 kubelet[2649]: E0911 23:45:36.761457 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.761481 kubelet[2649]: W0911 23:45:36.761477 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.761564 kubelet[2649]: E0911 23:45:36.761501 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.761713 kubelet[2649]: E0911 23:45:36.761697 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.761713 kubelet[2649]: W0911 23:45:36.761711 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.761713 kubelet[2649]: E0911 23:45:36.761720 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.762781 kubelet[2649]: E0911 23:45:36.762761 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.762781 kubelet[2649]: W0911 23:45:36.762779 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.762935 kubelet[2649]: E0911 23:45:36.762792 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.763657 kubelet[2649]: E0911 23:45:36.763624 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.763657 kubelet[2649]: W0911 23:45:36.763642 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.763657 kubelet[2649]: E0911 23:45:36.763656 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.764996 containerd[1521]: time="2025-09-11T23:45:36.764209764Z" level=info msg="connecting to shim 022cffe0a4e2cceeb76fdc12eb1df4c04eb2d38ab8d3b80148518901b025b4c3" address="unix:///run/containerd/s/9be4b01572a3b3af7338ab9204e6b0a4e64db1bcf6aa66c1981a73b3d56e94be" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:36.808044 systemd[1]: Started cri-containerd-022cffe0a4e2cceeb76fdc12eb1df4c04eb2d38ab8d3b80148518901b025b4c3.scope - libcontainer container 022cffe0a4e2cceeb76fdc12eb1df4c04eb2d38ab8d3b80148518901b025b4c3. 
Sep 11 23:45:36.849529 containerd[1521]: time="2025-09-11T23:45:36.849476969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5rf8f,Uid:2689a985-04a7-4fd9-898f-64bc5989ba33,Namespace:calico-system,Attempt:0,} returns sandbox id \"022cffe0a4e2cceeb76fdc12eb1df4c04eb2d38ab8d3b80148518901b025b4c3\"" Sep 11 23:45:36.862675 kubelet[2649]: E0911 23:45:36.862632 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.862675 kubelet[2649]: W0911 23:45:36.862658 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.862829 kubelet[2649]: E0911 23:45:36.862787 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.863138 kubelet[2649]: E0911 23:45:36.863122 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.863172 kubelet[2649]: W0911 23:45:36.863138 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.863172 kubelet[2649]: E0911 23:45:36.863159 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.863485 kubelet[2649]: E0911 23:45:36.863466 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.863485 kubelet[2649]: W0911 23:45:36.863484 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.863547 kubelet[2649]: E0911 23:45:36.863513 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.863714 kubelet[2649]: E0911 23:45:36.863700 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.863714 kubelet[2649]: W0911 23:45:36.863712 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.863769 kubelet[2649]: E0911 23:45:36.863726 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.863947 kubelet[2649]: E0911 23:45:36.863931 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.863947 kubelet[2649]: W0911 23:45:36.863944 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.864142 kubelet[2649]: E0911 23:45:36.863964 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.864339 kubelet[2649]: E0911 23:45:36.864322 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.864397 kubelet[2649]: W0911 23:45:36.864385 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.864468 kubelet[2649]: E0911 23:45:36.864456 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.865977 kubelet[2649]: E0911 23:45:36.865955 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.865977 kubelet[2649]: W0911 23:45:36.865978 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.866073 kubelet[2649]: E0911 23:45:36.865996 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.866215 kubelet[2649]: E0911 23:45:36.866198 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.866215 kubelet[2649]: W0911 23:45:36.866210 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.866215 kubelet[2649]: E0911 23:45:36.866238 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.866406 kubelet[2649]: E0911 23:45:36.866390 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.866406 kubelet[2649]: W0911 23:45:36.866402 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.866507 kubelet[2649]: E0911 23:45:36.866427 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.866559 kubelet[2649]: E0911 23:45:36.866545 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.866559 kubelet[2649]: W0911 23:45:36.866555 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.866669 kubelet[2649]: E0911 23:45:36.866579 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.866706 kubelet[2649]: E0911 23:45:36.866674 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.866706 kubelet[2649]: W0911 23:45:36.866681 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.866751 kubelet[2649]: E0911 23:45:36.866730 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.866888 kubelet[2649]: E0911 23:45:36.866862 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.866888 kubelet[2649]: W0911 23:45:36.866874 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.867133 kubelet[2649]: E0911 23:45:36.866903 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.867133 kubelet[2649]: E0911 23:45:36.867077 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.867133 kubelet[2649]: W0911 23:45:36.867089 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.867243 kubelet[2649]: E0911 23:45:36.867230 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.867459 kubelet[2649]: E0911 23:45:36.867445 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.867544 kubelet[2649]: W0911 23:45:36.867532 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.867606 kubelet[2649]: E0911 23:45:36.867596 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.867849 kubelet[2649]: E0911 23:45:36.867820 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.867849 kubelet[2649]: W0911 23:45:36.867833 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.867990 kubelet[2649]: E0911 23:45:36.867975 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.868251 kubelet[2649]: E0911 23:45:36.868237 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.868415 kubelet[2649]: W0911 23:45:36.868398 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.868495 kubelet[2649]: E0911 23:45:36.868484 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.869059 kubelet[2649]: E0911 23:45:36.869028 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.869059 kubelet[2649]: W0911 23:45:36.869045 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.869155 kubelet[2649]: E0911 23:45:36.869066 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.874820 kubelet[2649]: E0911 23:45:36.873938 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.874820 kubelet[2649]: W0911 23:45:36.873959 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.874820 kubelet[2649]: E0911 23:45:36.873998 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.874820 kubelet[2649]: E0911 23:45:36.874138 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.874820 kubelet[2649]: W0911 23:45:36.874146 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.874820 kubelet[2649]: E0911 23:45:36.874194 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.874820 kubelet[2649]: E0911 23:45:36.874309 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.874820 kubelet[2649]: W0911 23:45:36.874319 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.874820 kubelet[2649]: E0911 23:45:36.874365 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.874820 kubelet[2649]: E0911 23:45:36.874458 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.875114 kubelet[2649]: W0911 23:45:36.874466 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.875114 kubelet[2649]: E0911 23:45:36.874486 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.875114 kubelet[2649]: E0911 23:45:36.874671 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.875114 kubelet[2649]: W0911 23:45:36.874679 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.875114 kubelet[2649]: E0911 23:45:36.874703 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.875114 kubelet[2649]: E0911 23:45:36.874849 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.875114 kubelet[2649]: W0911 23:45:36.874858 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.875114 kubelet[2649]: E0911 23:45:36.874871 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.875114 kubelet[2649]: E0911 23:45:36.875081 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.875114 kubelet[2649]: W0911 23:45:36.875094 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.875308 kubelet[2649]: E0911 23:45:36.875112 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:36.875543 kubelet[2649]: E0911 23:45:36.875413 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.875543 kubelet[2649]: W0911 23:45:36.875427 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.875543 kubelet[2649]: E0911 23:45:36.875437 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:36.887656 kubelet[2649]: E0911 23:45:36.887626 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:36.887656 kubelet[2649]: W0911 23:45:36.887641 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:36.887656 kubelet[2649]: E0911 23:45:36.887656 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:37.789255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094731059.mount: Deactivated successfully. 
Sep 11 23:45:38.486816 containerd[1521]: time="2025-09-11T23:45:38.486762110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:38.487217 containerd[1521]: time="2025-09-11T23:45:38.487186180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 11 23:45:38.489296 containerd[1521]: time="2025-09-11T23:45:38.489260083Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:38.490374 containerd[1521]: time="2025-09-11T23:45:38.490346558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:38.491196 containerd[1521]: time="2025-09-11T23:45:38.490869275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.80329274s" Sep 11 23:45:38.491196 containerd[1521]: time="2025-09-11T23:45:38.490912638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 11 23:45:38.491870 containerd[1521]: time="2025-09-11T23:45:38.491826901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 23:45:38.507187 containerd[1521]: time="2025-09-11T23:45:38.507140761Z" level=info msg="CreateContainer within sandbox \"98b8ab7587d3ab83d56d5bc29c304c9e148b0cece838478370a98b3fdb1e966e\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 23:45:38.522939 containerd[1521]: time="2025-09-11T23:45:38.520255269Z" level=info msg="Container 8f2f9a5dd4edc212cc2af290357ad446e4257cc9552a4b51308e46ea14dbb5cd: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:38.528703 containerd[1521]: time="2025-09-11T23:45:38.528627689Z" level=info msg="CreateContainer within sandbox \"98b8ab7587d3ab83d56d5bc29c304c9e148b0cece838478370a98b3fdb1e966e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8f2f9a5dd4edc212cc2af290357ad446e4257cc9552a4b51308e46ea14dbb5cd\"" Sep 11 23:45:38.529441 containerd[1521]: time="2025-09-11T23:45:38.529380501Z" level=info msg="StartContainer for \"8f2f9a5dd4edc212cc2af290357ad446e4257cc9552a4b51308e46ea14dbb5cd\"" Sep 11 23:45:38.530524 containerd[1521]: time="2025-09-11T23:45:38.530485938Z" level=info msg="connecting to shim 8f2f9a5dd4edc212cc2af290357ad446e4257cc9552a4b51308e46ea14dbb5cd" address="unix:///run/containerd/s/562b4d265281efa6b5c0f65ec5c38cd45d9ae6530c338a6c23ecdadf5617e404" protocol=ttrpc version=3 Sep 11 23:45:38.553077 systemd[1]: Started cri-containerd-8f2f9a5dd4edc212cc2af290357ad446e4257cc9552a4b51308e46ea14dbb5cd.scope - libcontainer container 8f2f9a5dd4edc212cc2af290357ad446e4257cc9552a4b51308e46ea14dbb5cd. 
Sep 11 23:45:38.590331 containerd[1521]: time="2025-09-11T23:45:38.590285639Z" level=info msg="StartContainer for \"8f2f9a5dd4edc212cc2af290357ad446e4257cc9552a4b51308e46ea14dbb5cd\" returns successfully" Sep 11 23:45:38.838084 kubelet[2649]: E0911 23:45:38.837944 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2v2z" podUID="2c50f3e7-d108-4dc8-8180-0e0ae420aa58" Sep 11 23:45:38.921067 kubelet[2649]: I0911 23:45:38.920965 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7697ff878f-kw8lt" podStartSLOduration=1.109427615 podStartE2EDuration="2.920947975s" podCreationTimestamp="2025-09-11 23:45:36 +0000 UTC" firstStartedPulling="2025-09-11 23:45:36.680187213 +0000 UTC m=+18.935332268" lastFinishedPulling="2025-09-11 23:45:38.491707573 +0000 UTC m=+20.746852628" observedRunningTime="2025-09-11 23:45:38.911428075 +0000 UTC m=+21.166573130" watchObservedRunningTime="2025-09-11 23:45:38.920947975 +0000 UTC m=+21.176093030" Sep 11 23:45:38.959389 kubelet[2649]: E0911 23:45:38.959362 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.959389 kubelet[2649]: W0911 23:45:38.959383 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.959389 kubelet[2649]: E0911 23:45:38.959402 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.959610 kubelet[2649]: E0911 23:45:38.959561 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.959650 kubelet[2649]: W0911 23:45:38.959568 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.959650 kubelet[2649]: E0911 23:45:38.959628 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.959847 kubelet[2649]: E0911 23:45:38.959833 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.959847 kubelet[2649]: W0911 23:45:38.959846 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.959931 kubelet[2649]: E0911 23:45:38.959862 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.960040 kubelet[2649]: E0911 23:45:38.960025 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.960075 kubelet[2649]: W0911 23:45:38.960049 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.960075 kubelet[2649]: E0911 23:45:38.960059 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.960235 kubelet[2649]: E0911 23:45:38.960224 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.960235 kubelet[2649]: W0911 23:45:38.960234 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.960298 kubelet[2649]: E0911 23:45:38.960242 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.960383 kubelet[2649]: E0911 23:45:38.960373 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.960383 kubelet[2649]: W0911 23:45:38.960382 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.960448 kubelet[2649]: E0911 23:45:38.960390 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.960530 kubelet[2649]: E0911 23:45:38.960521 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.960530 kubelet[2649]: W0911 23:45:38.960530 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.960589 kubelet[2649]: E0911 23:45:38.960537 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.960663 kubelet[2649]: E0911 23:45:38.960654 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.960663 kubelet[2649]: W0911 23:45:38.960663 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.960721 kubelet[2649]: E0911 23:45:38.960670 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.960800 kubelet[2649]: E0911 23:45:38.960790 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.960800 kubelet[2649]: W0911 23:45:38.960800 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.960867 kubelet[2649]: E0911 23:45:38.960808 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.961006 kubelet[2649]: E0911 23:45:38.960995 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.961006 kubelet[2649]: W0911 23:45:38.961005 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.961071 kubelet[2649]: E0911 23:45:38.961013 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.961143 kubelet[2649]: E0911 23:45:38.961133 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.961143 kubelet[2649]: W0911 23:45:38.961142 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.961197 kubelet[2649]: E0911 23:45:38.961149 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.961276 kubelet[2649]: E0911 23:45:38.961267 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.961276 kubelet[2649]: W0911 23:45:38.961276 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.961336 kubelet[2649]: E0911 23:45:38.961283 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.961408 kubelet[2649]: E0911 23:45:38.961399 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.961408 kubelet[2649]: W0911 23:45:38.961407 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.961464 kubelet[2649]: E0911 23:45:38.961414 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.961535 kubelet[2649]: E0911 23:45:38.961526 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.961535 kubelet[2649]: W0911 23:45:38.961535 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.961590 kubelet[2649]: E0911 23:45:38.961542 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.961661 kubelet[2649]: E0911 23:45:38.961651 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.961661 kubelet[2649]: W0911 23:45:38.961660 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.961711 kubelet[2649]: E0911 23:45:38.961667 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.983244 kubelet[2649]: E0911 23:45:38.983118 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.983244 kubelet[2649]: W0911 23:45:38.983136 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.983244 kubelet[2649]: E0911 23:45:38.983149 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.983392 kubelet[2649]: E0911 23:45:38.983308 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.983392 kubelet[2649]: W0911 23:45:38.983317 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.983392 kubelet[2649]: E0911 23:45:38.983332 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.983580 kubelet[2649]: E0911 23:45:38.983537 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.983580 kubelet[2649]: W0911 23:45:38.983555 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.983580 kubelet[2649]: E0911 23:45:38.983573 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.983712 kubelet[2649]: E0911 23:45:38.983700 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.983712 kubelet[2649]: W0911 23:45:38.983710 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.983785 kubelet[2649]: E0911 23:45:38.983722 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.983894 kubelet[2649]: E0911 23:45:38.983853 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.983894 kubelet[2649]: W0911 23:45:38.983863 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.983894 kubelet[2649]: E0911 23:45:38.983876 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.984046 kubelet[2649]: E0911 23:45:38.984035 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.984046 kubelet[2649]: W0911 23:45:38.984044 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.984217 kubelet[2649]: E0911 23:45:38.984057 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.984304 kubelet[2649]: E0911 23:45:38.984286 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.984358 kubelet[2649]: W0911 23:45:38.984347 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.984431 kubelet[2649]: E0911 23:45:38.984418 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.984603 kubelet[2649]: E0911 23:45:38.984583 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.984603 kubelet[2649]: W0911 23:45:38.984596 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.984663 kubelet[2649]: E0911 23:45:38.984611 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.984761 kubelet[2649]: E0911 23:45:38.984749 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.984761 kubelet[2649]: W0911 23:45:38.984759 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.984812 kubelet[2649]: E0911 23:45:38.984771 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.985004 kubelet[2649]: E0911 23:45:38.984973 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.985004 kubelet[2649]: W0911 23:45:38.984990 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.985073 kubelet[2649]: E0911 23:45:38.985006 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.985198 kubelet[2649]: E0911 23:45:38.985184 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.985198 kubelet[2649]: W0911 23:45:38.985195 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.985366 kubelet[2649]: E0911 23:45:38.985210 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.985449 kubelet[2649]: E0911 23:45:38.985434 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.985496 kubelet[2649]: W0911 23:45:38.985486 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.985559 kubelet[2649]: E0911 23:45:38.985548 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.985731 kubelet[2649]: E0911 23:45:38.985713 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.985731 kubelet[2649]: W0911 23:45:38.985727 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.985794 kubelet[2649]: E0911 23:45:38.985740 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.985893 kubelet[2649]: E0911 23:45:38.985871 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.985942 kubelet[2649]: W0911 23:45:38.985904 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.985942 kubelet[2649]: E0911 23:45:38.985918 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.986053 kubelet[2649]: E0911 23:45:38.986042 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.986053 kubelet[2649]: W0911 23:45:38.986051 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.986225 kubelet[2649]: E0911 23:45:38.986064 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.986302 kubelet[2649]: E0911 23:45:38.986289 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.986350 kubelet[2649]: W0911 23:45:38.986340 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.986414 kubelet[2649]: E0911 23:45:38.986401 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:38.986620 kubelet[2649]: E0911 23:45:38.986602 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.986620 kubelet[2649]: W0911 23:45:38.986613 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.986676 kubelet[2649]: E0911 23:45:38.986621 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:45:38.987002 kubelet[2649]: E0911 23:45:38.986984 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:45:38.987002 kubelet[2649]: W0911 23:45:38.986998 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:45:38.987063 kubelet[2649]: E0911 23:45:38.987008 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:45:39.481428 containerd[1521]: time="2025-09-11T23:45:39.481383295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:39.481920 containerd[1521]: time="2025-09-11T23:45:39.481889609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 11 23:45:39.482989 containerd[1521]: time="2025-09-11T23:45:39.482941518Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:39.484967 containerd[1521]: time="2025-09-11T23:45:39.484933250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:39.486052 containerd[1521]: time="2025-09-11T23:45:39.485937316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 994.058892ms" Sep 11 23:45:39.486052 containerd[1521]: time="2025-09-11T23:45:39.485967158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 11 23:45:39.488348 containerd[1521]: time="2025-09-11T23:45:39.488285032Z" level=info msg="CreateContainer within sandbox \"022cffe0a4e2cceeb76fdc12eb1df4c04eb2d38ab8d3b80148518901b025b4c3\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 23:45:39.496485 containerd[1521]: time="2025-09-11T23:45:39.495447185Z" level=info msg="Container a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:39.503443 containerd[1521]: time="2025-09-11T23:45:39.503407032Z" level=info msg="CreateContainer within sandbox \"022cffe0a4e2cceeb76fdc12eb1df4c04eb2d38ab8d3b80148518901b025b4c3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f\"" Sep 11 23:45:39.504073 containerd[1521]: time="2025-09-11T23:45:39.504031433Z" level=info msg="StartContainer for \"a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f\"" Sep 11 23:45:39.506586 containerd[1521]: time="2025-09-11T23:45:39.506548720Z" level=info msg="connecting to shim a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f" address="unix:///run/containerd/s/9be4b01572a3b3af7338ab9204e6b0a4e64db1bcf6aa66c1981a73b3d56e94be" protocol=ttrpc version=3 Sep 11 23:45:39.541095 systemd[1]: Started cri-containerd-a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f.scope - libcontainer container a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f. Sep 11 23:45:39.586945 systemd[1]: cri-containerd-a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f.scope: Deactivated successfully. Sep 11 23:45:39.587389 systemd[1]: cri-containerd-a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f.scope: Consumed 28ms CPU time, 6.2M memory peak, 4.5M written to disk. 
Sep 11 23:45:39.604523 containerd[1521]: time="2025-09-11T23:45:39.604481918Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f\" id:\"a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f\" pid:3348 exited_at:{seconds:1757634339 nanos:604030808}" Sep 11 23:45:39.637239 containerd[1521]: time="2025-09-11T23:45:39.637194482Z" level=info msg="StartContainer for \"a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f\" returns successfully" Sep 11 23:45:39.640197 containerd[1521]: time="2025-09-11T23:45:39.640161598Z" level=info msg="received exit event container_id:\"a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f\" id:\"a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f\" pid:3348 exited_at:{seconds:1757634339 nanos:604030808}" Sep 11 23:45:39.691006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a67a932ed50a1980bd7b6a97282712fbf88d429b923ef9b7d5097032ba51f76f-rootfs.mount: Deactivated successfully. 
Sep 11 23:45:39.900418 containerd[1521]: time="2025-09-11T23:45:39.900307927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 23:45:40.838226 kubelet[2649]: E0911 23:45:40.838162 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2v2z" podUID="2c50f3e7-d108-4dc8-8180-0e0ae420aa58" Sep 11 23:45:42.838140 kubelet[2649]: E0911 23:45:42.838094 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2v2z" podUID="2c50f3e7-d108-4dc8-8180-0e0ae420aa58" Sep 11 23:45:43.377672 containerd[1521]: time="2025-09-11T23:45:43.377136436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:43.377672 containerd[1521]: time="2025-09-11T23:45:43.377671666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 11 23:45:43.378588 containerd[1521]: time="2025-09-11T23:45:43.378561875Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:43.380732 containerd[1521]: time="2025-09-11T23:45:43.380697194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:43.381280 containerd[1521]: time="2025-09-11T23:45:43.381246344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.480510389s" Sep 11 23:45:43.381319 containerd[1521]: time="2025-09-11T23:45:43.381278666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 11 23:45:43.383633 containerd[1521]: time="2025-09-11T23:45:43.383598195Z" level=info msg="CreateContainer within sandbox \"022cffe0a4e2cceeb76fdc12eb1df4c04eb2d38ab8d3b80148518901b025b4c3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 23:45:43.391292 containerd[1521]: time="2025-09-11T23:45:43.391249620Z" level=info msg="Container 20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:43.398269 containerd[1521]: time="2025-09-11T23:45:43.398223328Z" level=info msg="CreateContainer within sandbox \"022cffe0a4e2cceeb76fdc12eb1df4c04eb2d38ab8d3b80148518901b025b4c3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd\"" Sep 11 23:45:43.398657 containerd[1521]: time="2025-09-11T23:45:43.398633311Z" level=info msg="StartContainer for \"20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd\"" Sep 11 23:45:43.400490 containerd[1521]: time="2025-09-11T23:45:43.400451492Z" level=info msg="connecting to shim 20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd" address="unix:///run/containerd/s/9be4b01572a3b3af7338ab9204e6b0a4e64db1bcf6aa66c1981a73b3d56e94be" protocol=ttrpc version=3 Sep 11 23:45:43.427135 systemd[1]: Started cri-containerd-20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd.scope - libcontainer container 
20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd. Sep 11 23:45:43.458293 containerd[1521]: time="2025-09-11T23:45:43.458242625Z" level=info msg="StartContainer for \"20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd\" returns successfully" Sep 11 23:45:44.042600 systemd[1]: cri-containerd-20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd.scope: Deactivated successfully. Sep 11 23:45:44.042907 systemd[1]: cri-containerd-20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd.scope: Consumed 443ms CPU time, 177.1M memory peak, 3.9M read from disk, 165.8M written to disk. Sep 11 23:45:44.046897 containerd[1521]: time="2025-09-11T23:45:44.046814249Z" level=info msg="received exit event container_id:\"20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd\" id:\"20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd\" pid:3404 exited_at:{seconds:1757634344 nanos:46548555}" Sep 11 23:45:44.047101 containerd[1521]: time="2025-09-11T23:45:44.046864371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd\" id:\"20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd\" pid:3404 exited_at:{seconds:1757634344 nanos:46548555}" Sep 11 23:45:44.065912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20c77390b76efa25c099bf8832086c356db5dfc16c3fb7ea9172e79d6a9a1cfd-rootfs.mount: Deactivated successfully. Sep 11 23:45:44.131189 kubelet[2649]: I0911 23:45:44.131155 2649 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 23:45:44.176478 systemd[1]: Created slice kubepods-burstable-podbd6c0474_4722_46bb_92d1_451df3477b61.slice - libcontainer container kubepods-burstable-podbd6c0474_4722_46bb_92d1_451df3477b61.slice. 
Sep 11 23:45:44.197470 systemd[1]: Created slice kubepods-burstable-pod15cf90eb_249b_4f72_bbf9_ef8ea7c68422.slice - libcontainer container kubepods-burstable-pod15cf90eb_249b_4f72_bbf9_ef8ea7c68422.slice. Sep 11 23:45:44.203245 systemd[1]: Created slice kubepods-besteffort-podc2ab5d0a_092a_4cff_bd38_6146dd672da3.slice - libcontainer container kubepods-besteffort-podc2ab5d0a_092a_4cff_bd38_6146dd672da3.slice. Sep 11 23:45:44.212867 systemd[1]: Created slice kubepods-besteffort-pod6a07ef18_af1f_4f4a_af12_97ce702b3ff4.slice - libcontainer container kubepods-besteffort-pod6a07ef18_af1f_4f4a_af12_97ce702b3ff4.slice. Sep 11 23:45:44.217066 kubelet[2649]: I0911 23:45:44.217033 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdt8t\" (UniqueName: \"kubernetes.io/projected/c2ab5d0a-092a-4cff-bd38-6146dd672da3-kube-api-access-xdt8t\") pod \"calico-kube-controllers-74c68f96f4-6lq2g\" (UID: \"c2ab5d0a-092a-4cff-bd38-6146dd672da3\") " pod="calico-system/calico-kube-controllers-74c68f96f4-6lq2g" Sep 11 23:45:44.217066 kubelet[2649]: I0911 23:45:44.217070 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkkwn\" (UniqueName: \"kubernetes.io/projected/15cf90eb-249b-4f72-bbf9-ef8ea7c68422-kube-api-access-tkkwn\") pod \"coredns-668d6bf9bc-bkb89\" (UID: \"15cf90eb-249b-4f72-bbf9-ef8ea7c68422\") " pod="kube-system/coredns-668d6bf9bc-bkb89" Sep 11 23:45:44.217195 kubelet[2649]: I0911 23:45:44.217088 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7ff5acd-1257-4129-9bce-3657553f282b-calico-apiserver-certs\") pod \"calico-apiserver-5d7f7f4669-km7p8\" (UID: \"b7ff5acd-1257-4129-9bce-3657553f282b\") " pod="calico-apiserver/calico-apiserver-5d7f7f4669-km7p8" Sep 11 23:45:44.217195 kubelet[2649]: I0911 23:45:44.217104 2649 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a07ef18-af1f-4f4a-af12-97ce702b3ff4-calico-apiserver-certs\") pod \"calico-apiserver-5d7f7f4669-qd5gx\" (UID: \"6a07ef18-af1f-4f4a-af12-97ce702b3ff4\") " pod="calico-apiserver/calico-apiserver-5d7f7f4669-qd5gx" Sep 11 23:45:44.217195 kubelet[2649]: I0911 23:45:44.217122 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15cf90eb-249b-4f72-bbf9-ef8ea7c68422-config-volume\") pod \"coredns-668d6bf9bc-bkb89\" (UID: \"15cf90eb-249b-4f72-bbf9-ef8ea7c68422\") " pod="kube-system/coredns-668d6bf9bc-bkb89" Sep 11 23:45:44.217195 kubelet[2649]: I0911 23:45:44.217151 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l9cn\" (UniqueName: \"kubernetes.io/projected/b7ff5acd-1257-4129-9bce-3657553f282b-kube-api-access-7l9cn\") pod \"calico-apiserver-5d7f7f4669-km7p8\" (UID: \"b7ff5acd-1257-4129-9bce-3657553f282b\") " pod="calico-apiserver/calico-apiserver-5d7f7f4669-km7p8" Sep 11 23:45:44.217195 kubelet[2649]: I0911 23:45:44.217166 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26419464-e9d2-4216-9042-f037752cbf30-config\") pod \"goldmane-54d579b49d-879ns\" (UID: \"26419464-e9d2-4216-9042-f037752cbf30\") " pod="calico-system/goldmane-54d579b49d-879ns" Sep 11 23:45:44.217350 kubelet[2649]: I0911 23:45:44.217181 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzxxl\" (UniqueName: \"kubernetes.io/projected/bd6c0474-4722-46bb-92d1-451df3477b61-kube-api-access-jzxxl\") pod \"coredns-668d6bf9bc-cgpqc\" (UID: \"bd6c0474-4722-46bb-92d1-451df3477b61\") " 
pod="kube-system/coredns-668d6bf9bc-cgpqc" Sep 11 23:45:44.217350 kubelet[2649]: I0911 23:45:44.217197 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbnqr\" (UniqueName: \"kubernetes.io/projected/26419464-e9d2-4216-9042-f037752cbf30-kube-api-access-kbnqr\") pod \"goldmane-54d579b49d-879ns\" (UID: \"26419464-e9d2-4216-9042-f037752cbf30\") " pod="calico-system/goldmane-54d579b49d-879ns" Sep 11 23:45:44.217350 kubelet[2649]: I0911 23:45:44.217224 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/26419464-e9d2-4216-9042-f037752cbf30-goldmane-key-pair\") pod \"goldmane-54d579b49d-879ns\" (UID: \"26419464-e9d2-4216-9042-f037752cbf30\") " pod="calico-system/goldmane-54d579b49d-879ns" Sep 11 23:45:44.217350 kubelet[2649]: I0911 23:45:44.217243 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6nt9\" (UniqueName: \"kubernetes.io/projected/6a07ef18-af1f-4f4a-af12-97ce702b3ff4-kube-api-access-s6nt9\") pod \"calico-apiserver-5d7f7f4669-qd5gx\" (UID: \"6a07ef18-af1f-4f4a-af12-97ce702b3ff4\") " pod="calico-apiserver/calico-apiserver-5d7f7f4669-qd5gx" Sep 11 23:45:44.217350 kubelet[2649]: I0911 23:45:44.217260 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c7be61c-2b7b-4b28-a924-a8d64028218e-whisker-ca-bundle\") pod \"whisker-5dbd8b59c-cg7b5\" (UID: \"3c7be61c-2b7b-4b28-a924-a8d64028218e\") " pod="calico-system/whisker-5dbd8b59c-cg7b5" Sep 11 23:45:44.217455 kubelet[2649]: I0911 23:45:44.217281 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd6c0474-4722-46bb-92d1-451df3477b61-config-volume\") pod 
\"coredns-668d6bf9bc-cgpqc\" (UID: \"bd6c0474-4722-46bb-92d1-451df3477b61\") " pod="kube-system/coredns-668d6bf9bc-cgpqc" Sep 11 23:45:44.218989 kubelet[2649]: I0911 23:45:44.217336 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2ab5d0a-092a-4cff-bd38-6146dd672da3-tigera-ca-bundle\") pod \"calico-kube-controllers-74c68f96f4-6lq2g\" (UID: \"c2ab5d0a-092a-4cff-bd38-6146dd672da3\") " pod="calico-system/calico-kube-controllers-74c68f96f4-6lq2g" Sep 11 23:45:44.218989 kubelet[2649]: I0911 23:45:44.217546 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26419464-e9d2-4216-9042-f037752cbf30-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-879ns\" (UID: \"26419464-e9d2-4216-9042-f037752cbf30\") " pod="calico-system/goldmane-54d579b49d-879ns" Sep 11 23:45:44.218989 kubelet[2649]: I0911 23:45:44.217564 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8pxc\" (UniqueName: \"kubernetes.io/projected/3c7be61c-2b7b-4b28-a924-a8d64028218e-kube-api-access-v8pxc\") pod \"whisker-5dbd8b59c-cg7b5\" (UID: \"3c7be61c-2b7b-4b28-a924-a8d64028218e\") " pod="calico-system/whisker-5dbd8b59c-cg7b5" Sep 11 23:45:44.218989 kubelet[2649]: I0911 23:45:44.217596 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c7be61c-2b7b-4b28-a924-a8d64028218e-whisker-backend-key-pair\") pod \"whisker-5dbd8b59c-cg7b5\" (UID: \"3c7be61c-2b7b-4b28-a924-a8d64028218e\") " pod="calico-system/whisker-5dbd8b59c-cg7b5" Sep 11 23:45:44.218333 systemd[1]: Created slice kubepods-besteffort-podb7ff5acd_1257_4129_9bce_3657553f282b.slice - libcontainer container 
kubepods-besteffort-podb7ff5acd_1257_4129_9bce_3657553f282b.slice. Sep 11 23:45:44.224449 systemd[1]: Created slice kubepods-besteffort-pod26419464_e9d2_4216_9042_f037752cbf30.slice - libcontainer container kubepods-besteffort-pod26419464_e9d2_4216_9042_f037752cbf30.slice. Sep 11 23:45:44.231235 systemd[1]: Created slice kubepods-besteffort-pod3c7be61c_2b7b_4b28_a924_a8d64028218e.slice - libcontainer container kubepods-besteffort-pod3c7be61c_2b7b_4b28_a924_a8d64028218e.slice. Sep 11 23:45:44.482874 containerd[1521]: time="2025-09-11T23:45:44.482834714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cgpqc,Uid:bd6c0474-4722-46bb-92d1-451df3477b61,Namespace:kube-system,Attempt:0,}" Sep 11 23:45:44.501595 containerd[1521]: time="2025-09-11T23:45:44.501200694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bkb89,Uid:15cf90eb-249b-4f72-bbf9-ef8ea7c68422,Namespace:kube-system,Attempt:0,}" Sep 11 23:45:44.508076 containerd[1521]: time="2025-09-11T23:45:44.507896131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c68f96f4-6lq2g,Uid:c2ab5d0a-092a-4cff-bd38-6146dd672da3,Namespace:calico-system,Attempt:0,}" Sep 11 23:45:44.518137 containerd[1521]: time="2025-09-11T23:45:44.518065234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f7f4669-qd5gx,Uid:6a07ef18-af1f-4f4a-af12-97ce702b3ff4,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:45:44.522909 containerd[1521]: time="2025-09-11T23:45:44.522845649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f7f4669-km7p8,Uid:b7ff5acd-1257-4129-9bce-3657553f282b,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:45:44.529634 containerd[1521]: time="2025-09-11T23:45:44.529319514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-879ns,Uid:26419464-e9d2-4216-9042-f037752cbf30,Namespace:calico-system,Attempt:0,}" Sep 11 23:45:44.535846 
containerd[1521]: time="2025-09-11T23:45:44.535778539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dbd8b59c-cg7b5,Uid:3c7be61c-2b7b-4b28-a924-a8d64028218e,Namespace:calico-system,Attempt:0,}" Sep 11 23:45:44.599640 containerd[1521]: time="2025-09-11T23:45:44.599534101Z" level=error msg="Failed to destroy network for sandbox \"095582a6694525fbbbb71bf93538badbd4621083678302ea6758885fb28ce94e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.601961 containerd[1521]: time="2025-09-11T23:45:44.601921428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c68f96f4-6lq2g,Uid:c2ab5d0a-092a-4cff-bd38-6146dd672da3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"095582a6694525fbbbb71bf93538badbd4621083678302ea6758885fb28ce94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.603443 kubelet[2649]: E0911 23:45:44.603384 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095582a6694525fbbbb71bf93538badbd4621083678302ea6758885fb28ce94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.605703 kubelet[2649]: E0911 23:45:44.605660 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095582a6694525fbbbb71bf93538badbd4621083678302ea6758885fb28ce94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c68f96f4-6lq2g" Sep 11 23:45:44.605819 kubelet[2649]: E0911 23:45:44.605708 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095582a6694525fbbbb71bf93538badbd4621083678302ea6758885fb28ce94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c68f96f4-6lq2g" Sep 11 23:45:44.606958 kubelet[2649]: E0911 23:45:44.605858 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74c68f96f4-6lq2g_calico-system(c2ab5d0a-092a-4cff-bd38-6146dd672da3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74c68f96f4-6lq2g_calico-system(c2ab5d0a-092a-4cff-bd38-6146dd672da3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"095582a6694525fbbbb71bf93538badbd4621083678302ea6758885fb28ce94e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74c68f96f4-6lq2g" podUID="c2ab5d0a-092a-4cff-bd38-6146dd672da3" Sep 11 23:45:44.611856 containerd[1521]: time="2025-09-11T23:45:44.611741872Z" level=error msg="Failed to destroy network for sandbox \"f517287532624877c7d15e105c762d966261c58fa1856667bac2304a24b8cca1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.614049 containerd[1521]: time="2025-09-11T23:45:44.614004673Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f7f4669-qd5gx,Uid:6a07ef18-af1f-4f4a-af12-97ce702b3ff4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f517287532624877c7d15e105c762d966261c58fa1856667bac2304a24b8cca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.614251 kubelet[2649]: E0911 23:45:44.614210 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f517287532624877c7d15e105c762d966261c58fa1856667bac2304a24b8cca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.614982 kubelet[2649]: E0911 23:45:44.614940 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f517287532624877c7d15e105c762d966261c58fa1856667bac2304a24b8cca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f7f4669-qd5gx" Sep 11 23:45:44.615140 kubelet[2649]: E0911 23:45:44.614985 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f517287532624877c7d15e105c762d966261c58fa1856667bac2304a24b8cca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f7f4669-qd5gx" Sep 11 23:45:44.615140 kubelet[2649]: E0911 
23:45:44.615028 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f7f4669-qd5gx_calico-apiserver(6a07ef18-af1f-4f4a-af12-97ce702b3ff4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f7f4669-qd5gx_calico-apiserver(6a07ef18-af1f-4f4a-af12-97ce702b3ff4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f517287532624877c7d15e105c762d966261c58fa1856667bac2304a24b8cca1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f7f4669-qd5gx" podUID="6a07ef18-af1f-4f4a-af12-97ce702b3ff4" Sep 11 23:45:44.622402 containerd[1521]: time="2025-09-11T23:45:44.622337517Z" level=error msg="Failed to destroy network for sandbox \"56d2d5970a828b7137cfcfd322e4811ab92b6857c7d30984fcacc0a83cea09f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.623677 containerd[1521]: time="2025-09-11T23:45:44.623617906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-879ns,Uid:26419464-e9d2-4216-9042-f037752cbf30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56d2d5970a828b7137cfcfd322e4811ab92b6857c7d30984fcacc0a83cea09f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.623909 kubelet[2649]: E0911 23:45:44.623835 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"56d2d5970a828b7137cfcfd322e4811ab92b6857c7d30984fcacc0a83cea09f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.623960 kubelet[2649]: E0911 23:45:44.623915 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56d2d5970a828b7137cfcfd322e4811ab92b6857c7d30984fcacc0a83cea09f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-879ns" Sep 11 23:45:44.623960 kubelet[2649]: E0911 23:45:44.623945 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56d2d5970a828b7137cfcfd322e4811ab92b6857c7d30984fcacc0a83cea09f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-879ns" Sep 11 23:45:44.624027 kubelet[2649]: E0911 23:45:44.623994 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-879ns_calico-system(26419464-e9d2-4216-9042-f037752cbf30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-879ns_calico-system(26419464-e9d2-4216-9042-f037752cbf30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56d2d5970a828b7137cfcfd322e4811ab92b6857c7d30984fcacc0a83cea09f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-879ns" 
podUID="26419464-e9d2-4216-9042-f037752cbf30" Sep 11 23:45:44.631304 containerd[1521]: time="2025-09-11T23:45:44.631247153Z" level=error msg="Failed to destroy network for sandbox \"2176e7768d9f474621fa90eb9b390eef48e9afa22d3fd5a2ae070034689fe29b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.632238 containerd[1521]: time="2025-09-11T23:45:44.632175922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f7f4669-km7p8,Uid:b7ff5acd-1257-4129-9bce-3657553f282b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2176e7768d9f474621fa90eb9b390eef48e9afa22d3fd5a2ae070034689fe29b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.632478 kubelet[2649]: E0911 23:45:44.632426 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2176e7768d9f474621fa90eb9b390eef48e9afa22d3fd5a2ae070034689fe29b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.632539 kubelet[2649]: E0911 23:45:44.632494 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2176e7768d9f474621fa90eb9b390eef48e9afa22d3fd5a2ae070034689fe29b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f7f4669-km7p8" Sep 11 23:45:44.632539 
kubelet[2649]: E0911 23:45:44.632514 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2176e7768d9f474621fa90eb9b390eef48e9afa22d3fd5a2ae070034689fe29b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f7f4669-km7p8" Sep 11 23:45:44.632707 kubelet[2649]: E0911 23:45:44.632556 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f7f4669-km7p8_calico-apiserver(b7ff5acd-1257-4129-9bce-3657553f282b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f7f4669-km7p8_calico-apiserver(b7ff5acd-1257-4129-9bce-3657553f282b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2176e7768d9f474621fa90eb9b390eef48e9afa22d3fd5a2ae070034689fe29b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f7f4669-km7p8" podUID="b7ff5acd-1257-4129-9bce-3657553f282b" Sep 11 23:45:44.634570 containerd[1521]: time="2025-09-11T23:45:44.634438483Z" level=error msg="Failed to destroy network for sandbox \"b0311358e6d3960d04fec4f2d76db3e0fc6a59f65cd3633cc2ac91e52d84df65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.634851 containerd[1521]: time="2025-09-11T23:45:44.634820223Z" level=error msg="Failed to destroy network for sandbox \"2a2b149166f2c867ec2cc320859ee4d059b8c3a8415c956e43e5d40812778ecd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.635749 containerd[1521]: time="2025-09-11T23:45:44.635461498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bkb89,Uid:15cf90eb-249b-4f72-bbf9-ef8ea7c68422,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0311358e6d3960d04fec4f2d76db3e0fc6a59f65cd3633cc2ac91e52d84df65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.636010 kubelet[2649]: E0911 23:45:44.635966 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0311358e6d3960d04fec4f2d76db3e0fc6a59f65cd3633cc2ac91e52d84df65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.636065 kubelet[2649]: E0911 23:45:44.636020 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0311358e6d3960d04fec4f2d76db3e0fc6a59f65cd3633cc2ac91e52d84df65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bkb89" Sep 11 23:45:44.636065 kubelet[2649]: E0911 23:45:44.636037 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0311358e6d3960d04fec4f2d76db3e0fc6a59f65cd3633cc2ac91e52d84df65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bkb89" Sep 11 23:45:44.636222 kubelet[2649]: E0911 23:45:44.636072 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bkb89_kube-system(15cf90eb-249b-4f72-bbf9-ef8ea7c68422)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bkb89_kube-system(15cf90eb-249b-4f72-bbf9-ef8ea7c68422)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0311358e6d3960d04fec4f2d76db3e0fc6a59f65cd3633cc2ac91e52d84df65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bkb89" podUID="15cf90eb-249b-4f72-bbf9-ef8ea7c68422" Sep 11 23:45:44.637312 containerd[1521]: time="2025-09-11T23:45:44.636638800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cgpqc,Uid:bd6c0474-4722-46bb-92d1-451df3477b61,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a2b149166f2c867ec2cc320859ee4d059b8c3a8415c956e43e5d40812778ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.637604 kubelet[2649]: E0911 23:45:44.636792 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a2b149166f2c867ec2cc320859ee4d059b8c3a8415c956e43e5d40812778ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.637604 kubelet[2649]: E0911 23:45:44.636822 2649 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a2b149166f2c867ec2cc320859ee4d059b8c3a8415c956e43e5d40812778ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cgpqc" Sep 11 23:45:44.637604 kubelet[2649]: E0911 23:45:44.636836 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a2b149166f2c867ec2cc320859ee4d059b8c3a8415c956e43e5d40812778ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cgpqc" Sep 11 23:45:44.637689 kubelet[2649]: E0911 23:45:44.636861 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cgpqc_kube-system(bd6c0474-4722-46bb-92d1-451df3477b61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cgpqc_kube-system(bd6c0474-4722-46bb-92d1-451df3477b61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a2b149166f2c867ec2cc320859ee4d059b8c3a8415c956e43e5d40812778ecd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cgpqc" podUID="bd6c0474-4722-46bb-92d1-451df3477b61" Sep 11 23:45:44.642332 containerd[1521]: time="2025-09-11T23:45:44.642301422Z" level=error msg="Failed to destroy network for sandbox \"c10e0b95e9e9edb32eb6c01274a64a0829df52d54c662d2b3e1d6cebbee79179\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.643253 containerd[1521]: time="2025-09-11T23:45:44.643171229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dbd8b59c-cg7b5,Uid:3c7be61c-2b7b-4b28-a924-a8d64028218e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10e0b95e9e9edb32eb6c01274a64a0829df52d54c662d2b3e1d6cebbee79179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.643358 kubelet[2649]: E0911 23:45:44.643330 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10e0b95e9e9edb32eb6c01274a64a0829df52d54c662d2b3e1d6cebbee79179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.643408 kubelet[2649]: E0911 23:45:44.643390 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10e0b95e9e9edb32eb6c01274a64a0829df52d54c662d2b3e1d6cebbee79179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dbd8b59c-cg7b5" Sep 11 23:45:44.643450 kubelet[2649]: E0911 23:45:44.643412 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10e0b95e9e9edb32eb6c01274a64a0829df52d54c662d2b3e1d6cebbee79179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dbd8b59c-cg7b5" Sep 11 23:45:44.643474 kubelet[2649]: E0911 23:45:44.643444 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dbd8b59c-cg7b5_calico-system(3c7be61c-2b7b-4b28-a924-a8d64028218e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dbd8b59c-cg7b5_calico-system(3c7be61c-2b7b-4b28-a924-a8d64028218e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c10e0b95e9e9edb32eb6c01274a64a0829df52d54c662d2b3e1d6cebbee79179\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dbd8b59c-cg7b5" podUID="3c7be61c-2b7b-4b28-a924-a8d64028218e" Sep 11 23:45:44.842544 systemd[1]: Created slice kubepods-besteffort-pod2c50f3e7_d108_4dc8_8180_0e0ae420aa58.slice - libcontainer container kubepods-besteffort-pod2c50f3e7_d108_4dc8_8180_0e0ae420aa58.slice. 
Sep 11 23:45:44.845568 containerd[1521]: time="2025-09-11T23:45:44.844899993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2v2z,Uid:2c50f3e7-d108-4dc8-8180-0e0ae420aa58,Namespace:calico-system,Attempt:0,}" Sep 11 23:45:44.886035 containerd[1521]: time="2025-09-11T23:45:44.885986425Z" level=error msg="Failed to destroy network for sandbox \"8eda3ccea61e29d490f08930cbf2e806a8c82ce01aaa99ee44098e316b4a026a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.887191 containerd[1521]: time="2025-09-11T23:45:44.887121485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2v2z,Uid:2c50f3e7-d108-4dc8-8180-0e0ae420aa58,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eda3ccea61e29d490f08930cbf2e806a8c82ce01aaa99ee44098e316b4a026a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.887403 kubelet[2649]: E0911 23:45:44.887360 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eda3ccea61e29d490f08930cbf2e806a8c82ce01aaa99ee44098e316b4a026a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:45:44.887457 kubelet[2649]: E0911 23:45:44.887427 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eda3ccea61e29d490f08930cbf2e806a8c82ce01aaa99ee44098e316b4a026a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n2v2z" Sep 11 23:45:44.887457 kubelet[2649]: E0911 23:45:44.887447 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eda3ccea61e29d490f08930cbf2e806a8c82ce01aaa99ee44098e316b4a026a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n2v2z" Sep 11 23:45:44.887518 kubelet[2649]: E0911 23:45:44.887489 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n2v2z_calico-system(2c50f3e7-d108-4dc8-8180-0e0ae420aa58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n2v2z_calico-system(2c50f3e7-d108-4dc8-8180-0e0ae420aa58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eda3ccea61e29d490f08930cbf2e806a8c82ce01aaa99ee44098e316b4a026a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n2v2z" podUID="2c50f3e7-d108-4dc8-8180-0e0ae420aa58" Sep 11 23:45:44.921463 containerd[1521]: time="2025-09-11T23:45:44.921193984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 23:45:45.392568 systemd[1]: run-netns-cni\x2d5b4d4f67\x2dcd23\x2df07d\x2d58da\x2d5cf8c294761e.mount: Deactivated successfully. Sep 11 23:45:45.392658 systemd[1]: run-netns-cni\x2d9c5fc38b\x2d0258\x2da7f1\x2df5b2\x2d99bc26b6006c.mount: Deactivated successfully. Sep 11 23:45:45.392702 systemd[1]: run-netns-cni\x2d871d8d52\x2dc872\x2d4dca\x2d14c9\x2d335eea791c5e.mount: Deactivated successfully. 
Sep 11 23:45:45.392757 systemd[1]: run-netns-cni\x2d402ed296\x2d787b\x2d3dde\x2ddb8a\x2dd9990f38874d.mount: Deactivated successfully. Sep 11 23:45:49.163641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442389216.mount: Deactivated successfully. Sep 11 23:45:49.395385 containerd[1521]: time="2025-09-11T23:45:49.395325017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:49.395948 containerd[1521]: time="2025-09-11T23:45:49.395913243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 11 23:45:49.396667 containerd[1521]: time="2025-09-11T23:45:49.396613394Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:49.398313 containerd[1521]: time="2025-09-11T23:45:49.398287148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:49.398790 containerd[1521]: time="2025-09-11T23:45:49.398764449Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.47729437s" Sep 11 23:45:49.398837 containerd[1521]: time="2025-09-11T23:45:49.398796530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 11 23:45:49.408195 containerd[1521]: time="2025-09-11T23:45:49.408166624Z" level=info 
msg="CreateContainer within sandbox \"022cffe0a4e2cceeb76fdc12eb1df4c04eb2d38ab8d3b80148518901b025b4c3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 23:45:49.424174 containerd[1521]: time="2025-09-11T23:45:49.424038683Z" level=info msg="Container d1a81868b1d9b680c6bb16203a63aaf8563cc85e36fee2f6b8058b3a530832d5: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:49.436456 containerd[1521]: time="2025-09-11T23:45:49.436390948Z" level=info msg="CreateContainer within sandbox \"022cffe0a4e2cceeb76fdc12eb1df4c04eb2d38ab8d3b80148518901b025b4c3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d1a81868b1d9b680c6bb16203a63aaf8563cc85e36fee2f6b8058b3a530832d5\"" Sep 11 23:45:49.437807 containerd[1521]: time="2025-09-11T23:45:49.437463635Z" level=info msg="StartContainer for \"d1a81868b1d9b680c6bb16203a63aaf8563cc85e36fee2f6b8058b3a530832d5\"" Sep 11 23:45:49.439712 containerd[1521]: time="2025-09-11T23:45:49.439676813Z" level=info msg="connecting to shim d1a81868b1d9b680c6bb16203a63aaf8563cc85e36fee2f6b8058b3a530832d5" address="unix:///run/containerd/s/9be4b01572a3b3af7338ab9204e6b0a4e64db1bcf6aa66c1981a73b3d56e94be" protocol=ttrpc version=3 Sep 11 23:45:49.466220 systemd[1]: Started cri-containerd-d1a81868b1d9b680c6bb16203a63aaf8563cc85e36fee2f6b8058b3a530832d5.scope - libcontainer container d1a81868b1d9b680c6bb16203a63aaf8563cc85e36fee2f6b8058b3a530832d5. Sep 11 23:45:49.510082 containerd[1521]: time="2025-09-11T23:45:49.510043556Z" level=info msg="StartContainer for \"d1a81868b1d9b680c6bb16203a63aaf8563cc85e36fee2f6b8058b3a530832d5\" returns successfully" Sep 11 23:45:49.633107 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 23:45:49.633217 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 11 23:45:49.858459 kubelet[2649]: I0911 23:45:49.858330 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c7be61c-2b7b-4b28-a924-a8d64028218e-whisker-ca-bundle\") pod \"3c7be61c-2b7b-4b28-a924-a8d64028218e\" (UID: \"3c7be61c-2b7b-4b28-a924-a8d64028218e\") " Sep 11 23:45:49.858459 kubelet[2649]: I0911 23:45:49.858405 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c7be61c-2b7b-4b28-a924-a8d64028218e-whisker-backend-key-pair\") pod \"3c7be61c-2b7b-4b28-a924-a8d64028218e\" (UID: \"3c7be61c-2b7b-4b28-a924-a8d64028218e\") " Sep 11 23:45:49.858459 kubelet[2649]: I0911 23:45:49.858434 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8pxc\" (UniqueName: \"kubernetes.io/projected/3c7be61c-2b7b-4b28-a924-a8d64028218e-kube-api-access-v8pxc\") pod \"3c7be61c-2b7b-4b28-a924-a8d64028218e\" (UID: \"3c7be61c-2b7b-4b28-a924-a8d64028218e\") " Sep 11 23:45:49.862857 kubelet[2649]: I0911 23:45:49.862660 2649 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7be61c-2b7b-4b28-a924-a8d64028218e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3c7be61c-2b7b-4b28-a924-a8d64028218e" (UID: "3c7be61c-2b7b-4b28-a924-a8d64028218e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 23:45:49.875438 kubelet[2649]: I0911 23:45:49.875378 2649 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7be61c-2b7b-4b28-a924-a8d64028218e-kube-api-access-v8pxc" (OuterVolumeSpecName: "kube-api-access-v8pxc") pod "3c7be61c-2b7b-4b28-a924-a8d64028218e" (UID: "3c7be61c-2b7b-4b28-a924-a8d64028218e"). InnerVolumeSpecName "kube-api-access-v8pxc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 23:45:49.875766 kubelet[2649]: I0911 23:45:49.875693 2649 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7be61c-2b7b-4b28-a924-a8d64028218e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3c7be61c-2b7b-4b28-a924-a8d64028218e" (UID: "3c7be61c-2b7b-4b28-a924-a8d64028218e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 23:45:49.939565 systemd[1]: Removed slice kubepods-besteffort-pod3c7be61c_2b7b_4b28_a924_a8d64028218e.slice - libcontainer container kubepods-besteffort-pod3c7be61c_2b7b_4b28_a924_a8d64028218e.slice. Sep 11 23:45:49.952032 kubelet[2649]: I0911 23:45:49.951961 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5rf8f" podStartSLOduration=1.403744106 podStartE2EDuration="13.951926439s" podCreationTimestamp="2025-09-11 23:45:36 +0000 UTC" firstStartedPulling="2025-09-11 23:45:36.851274226 +0000 UTC m=+19.106419281" lastFinishedPulling="2025-09-11 23:45:49.399456559 +0000 UTC m=+31.654601614" observedRunningTime="2025-09-11 23:45:49.951276771 +0000 UTC m=+32.206421826" watchObservedRunningTime="2025-09-11 23:45:49.951926439 +0000 UTC m=+32.207071534" Sep 11 23:45:49.959193 kubelet[2649]: I0911 23:45:49.959148 2649 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c7be61c-2b7b-4b28-a924-a8d64028218e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 23:45:49.959193 kubelet[2649]: I0911 23:45:49.959183 2649 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c7be61c-2b7b-4b28-a924-a8d64028218e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 23:45:49.959193 kubelet[2649]: I0911 23:45:49.959194 2649 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-v8pxc\" (UniqueName: \"kubernetes.io/projected/3c7be61c-2b7b-4b28-a924-a8d64028218e-kube-api-access-v8pxc\") on node \"localhost\" DevicePath \"\"" Sep 11 23:45:50.001784 systemd[1]: Created slice kubepods-besteffort-pod5894029a_8e5a_432b_9a1c_52c23349d41b.slice - libcontainer container kubepods-besteffort-pod5894029a_8e5a_432b_9a1c_52c23349d41b.slice. Sep 11 23:45:50.060846 kubelet[2649]: I0911 23:45:50.060490 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5894029a-8e5a-432b-9a1c-52c23349d41b-whisker-backend-key-pair\") pod \"whisker-5798456644-57hft\" (UID: \"5894029a-8e5a-432b-9a1c-52c23349d41b\") " pod="calico-system/whisker-5798456644-57hft" Sep 11 23:45:50.060846 kubelet[2649]: I0911 23:45:50.060539 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5894029a-8e5a-432b-9a1c-52c23349d41b-whisker-ca-bundle\") pod \"whisker-5798456644-57hft\" (UID: \"5894029a-8e5a-432b-9a1c-52c23349d41b\") " pod="calico-system/whisker-5798456644-57hft" Sep 11 23:45:50.060846 kubelet[2649]: I0911 23:45:50.060563 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69sh8\" (UniqueName: \"kubernetes.io/projected/5894029a-8e5a-432b-9a1c-52c23349d41b-kube-api-access-69sh8\") pod \"whisker-5798456644-57hft\" (UID: \"5894029a-8e5a-432b-9a1c-52c23349d41b\") " pod="calico-system/whisker-5798456644-57hft" Sep 11 23:45:50.166472 systemd[1]: var-lib-kubelet-pods-3c7be61c\x2d2b7b\x2d4b28\x2da924\x2da8d64028218e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv8pxc.mount: Deactivated successfully. 
Sep 11 23:45:50.166559 systemd[1]: var-lib-kubelet-pods-3c7be61c\x2d2b7b\x2d4b28\x2da924\x2da8d64028218e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 11 23:45:50.308462 containerd[1521]: time="2025-09-11T23:45:50.308419090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5798456644-57hft,Uid:5894029a-8e5a-432b-9a1c-52c23349d41b,Namespace:calico-system,Attempt:0,}" Sep 11 23:45:50.477502 systemd-networkd[1413]: calie0510454745: Link UP Sep 11 23:45:50.477786 systemd-networkd[1413]: calie0510454745: Gained carrier Sep 11 23:45:50.491887 containerd[1521]: 2025-09-11 23:45:50.331 [INFO][3782] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 23:45:50.491887 containerd[1521]: 2025-09-11 23:45:50.363 [INFO][3782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5798456644--57hft-eth0 whisker-5798456644- calico-system 5894029a-8e5a-432b-9a1c-52c23349d41b 862 0 2025-09-11 23:45:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5798456644 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5798456644-57hft eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie0510454745 [] [] }} ContainerID="9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" Namespace="calico-system" Pod="whisker-5798456644-57hft" WorkloadEndpoint="localhost-k8s-whisker--5798456644--57hft-" Sep 11 23:45:50.491887 containerd[1521]: 2025-09-11 23:45:50.363 [INFO][3782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" Namespace="calico-system" Pod="whisker-5798456644-57hft" WorkloadEndpoint="localhost-k8s-whisker--5798456644--57hft-eth0" Sep 11 23:45:50.491887 containerd[1521]: 
2025-09-11 23:45:50.431 [INFO][3797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" HandleID="k8s-pod-network.9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" Workload="localhost-k8s-whisker--5798456644--57hft-eth0" Sep 11 23:45:50.492272 containerd[1521]: 2025-09-11 23:45:50.431 [INFO][3797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" HandleID="k8s-pod-network.9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" Workload="localhost-k8s-whisker--5798456644--57hft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136b00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5798456644-57hft", "timestamp":"2025-09-11 23:45:50.431568093 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:45:50.492272 containerd[1521]: 2025-09-11 23:45:50.431 [INFO][3797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:45:50.492272 containerd[1521]: 2025-09-11 23:45:50.431 [INFO][3797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:45:50.492272 containerd[1521]: 2025-09-11 23:45:50.431 [INFO][3797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:45:50.492272 containerd[1521]: 2025-09-11 23:45:50.442 [INFO][3797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" host="localhost" Sep 11 23:45:50.492272 containerd[1521]: 2025-09-11 23:45:50.449 [INFO][3797] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:45:50.492272 containerd[1521]: 2025-09-11 23:45:50.453 [INFO][3797] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:45:50.492272 containerd[1521]: 2025-09-11 23:45:50.455 [INFO][3797] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:50.492272 containerd[1521]: 2025-09-11 23:45:50.457 [INFO][3797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:50.492272 containerd[1521]: 2025-09-11 23:45:50.457 [INFO][3797] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" host="localhost" Sep 11 23:45:50.492483 containerd[1521]: 2025-09-11 23:45:50.459 [INFO][3797] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305 Sep 11 23:45:50.492483 containerd[1521]: 2025-09-11 23:45:50.463 [INFO][3797] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" host="localhost" Sep 11 23:45:50.492483 containerd[1521]: 2025-09-11 23:45:50.467 [INFO][3797] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" host="localhost" Sep 11 23:45:50.492483 containerd[1521]: 2025-09-11 23:45:50.467 [INFO][3797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" host="localhost" Sep 11 23:45:50.492483 containerd[1521]: 2025-09-11 23:45:50.467 [INFO][3797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:45:50.492483 containerd[1521]: 2025-09-11 23:45:50.467 [INFO][3797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" HandleID="k8s-pod-network.9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" Workload="localhost-k8s-whisker--5798456644--57hft-eth0" Sep 11 23:45:50.492634 containerd[1521]: 2025-09-11 23:45:50.470 [INFO][3782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" Namespace="calico-system" Pod="whisker-5798456644-57hft" WorkloadEndpoint="localhost-k8s-whisker--5798456644--57hft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5798456644--57hft-eth0", GenerateName:"whisker-5798456644-", Namespace:"calico-system", SelfLink:"", UID:"5894029a-8e5a-432b-9a1c-52c23349d41b", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5798456644", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5798456644-57hft", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie0510454745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:50.492634 containerd[1521]: 2025-09-11 23:45:50.470 [INFO][3782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" Namespace="calico-system" Pod="whisker-5798456644-57hft" WorkloadEndpoint="localhost-k8s-whisker--5798456644--57hft-eth0" Sep 11 23:45:50.492704 containerd[1521]: 2025-09-11 23:45:50.471 [INFO][3782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0510454745 ContainerID="9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" Namespace="calico-system" Pod="whisker-5798456644-57hft" WorkloadEndpoint="localhost-k8s-whisker--5798456644--57hft-eth0" Sep 11 23:45:50.492704 containerd[1521]: 2025-09-11 23:45:50.477 [INFO][3782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" Namespace="calico-system" Pod="whisker-5798456644-57hft" WorkloadEndpoint="localhost-k8s-whisker--5798456644--57hft-eth0" Sep 11 23:45:50.492740 containerd[1521]: 2025-09-11 23:45:50.478 [INFO][3782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" Namespace="calico-system" Pod="whisker-5798456644-57hft" 
WorkloadEndpoint="localhost-k8s-whisker--5798456644--57hft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5798456644--57hft-eth0", GenerateName:"whisker-5798456644-", Namespace:"calico-system", SelfLink:"", UID:"5894029a-8e5a-432b-9a1c-52c23349d41b", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5798456644", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305", Pod:"whisker-5798456644-57hft", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie0510454745", MAC:"52:a7:a8:61:21:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:50.492790 containerd[1521]: 2025-09-11 23:45:50.489 [INFO][3782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" Namespace="calico-system" Pod="whisker-5798456644-57hft" WorkloadEndpoint="localhost-k8s-whisker--5798456644--57hft-eth0" Sep 11 23:45:50.522423 containerd[1521]: time="2025-09-11T23:45:50.522375279Z" level=info msg="connecting to shim 
9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305" address="unix:///run/containerd/s/da456c914a6b7a282626785bcdb230a45bce75482b767194412dd44a8a895c9c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:50.553031 systemd[1]: Started cri-containerd-9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305.scope - libcontainer container 9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305. Sep 11 23:45:50.583126 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:45:50.603803 containerd[1521]: time="2025-09-11T23:45:50.603740583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5798456644-57hft,Uid:5894029a-8e5a-432b-9a1c-52c23349d41b,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305\"" Sep 11 23:45:50.605959 containerd[1521]: time="2025-09-11T23:45:50.605197365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 23:45:51.161515 containerd[1521]: time="2025-09-11T23:45:51.161453337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1a81868b1d9b680c6bb16203a63aaf8563cc85e36fee2f6b8058b3a530832d5\" id:\"139e9695ff4b346f4fd038e8daadb7b1ed5cbbfc367b82ce96cde093fbcda6f5\" pid:3966 exit_status:1 exited_at:{seconds:1757634351 nanos:160263608}" Sep 11 23:45:51.367349 systemd-networkd[1413]: vxlan.calico: Link UP Sep 11 23:45:51.367358 systemd-networkd[1413]: vxlan.calico: Gained carrier Sep 11 23:45:51.741697 containerd[1521]: time="2025-09-11T23:45:51.741654770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:51.743162 containerd[1521]: time="2025-09-11T23:45:51.743128711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 11 23:45:51.744863 
containerd[1521]: time="2025-09-11T23:45:51.744143553Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:51.746040 containerd[1521]: time="2025-09-11T23:45:51.745869344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:51.746533 containerd[1521]: time="2025-09-11T23:45:51.746504690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.140543413s" Sep 11 23:45:51.746619 containerd[1521]: time="2025-09-11T23:45:51.746604814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 11 23:45:51.748821 containerd[1521]: time="2025-09-11T23:45:51.748798544Z" level=info msg="CreateContainer within sandbox \"9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 23:45:51.754731 containerd[1521]: time="2025-09-11T23:45:51.754702867Z" level=info msg="Container 1554a350542eb08a1982df1f6a631ec2dc45ec5136856d2180a777df68d182bd: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:51.762987 containerd[1521]: time="2025-09-11T23:45:51.762952647Z" level=info msg="CreateContainer within sandbox \"9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"1554a350542eb08a1982df1f6a631ec2dc45ec5136856d2180a777df68d182bd\"" Sep 11 23:45:51.763693 containerd[1521]: time="2025-09-11T23:45:51.763665596Z" level=info msg="StartContainer for \"1554a350542eb08a1982df1f6a631ec2dc45ec5136856d2180a777df68d182bd\"" Sep 11 23:45:51.764811 containerd[1521]: time="2025-09-11T23:45:51.764766001Z" level=info msg="connecting to shim 1554a350542eb08a1982df1f6a631ec2dc45ec5136856d2180a777df68d182bd" address="unix:///run/containerd/s/da456c914a6b7a282626785bcdb230a45bce75482b767194412dd44a8a895c9c" protocol=ttrpc version=3 Sep 11 23:45:51.784030 systemd[1]: Started cri-containerd-1554a350542eb08a1982df1f6a631ec2dc45ec5136856d2180a777df68d182bd.scope - libcontainer container 1554a350542eb08a1982df1f6a631ec2dc45ec5136856d2180a777df68d182bd. Sep 11 23:45:51.825907 containerd[1521]: time="2025-09-11T23:45:51.825850035Z" level=info msg="StartContainer for \"1554a350542eb08a1982df1f6a631ec2dc45ec5136856d2180a777df68d182bd\" returns successfully" Sep 11 23:45:51.832897 containerd[1521]: time="2025-09-11T23:45:51.832722078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 23:45:51.841191 kubelet[2649]: I0911 23:45:51.841119 2649 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c7be61c-2b7b-4b28-a924-a8d64028218e" path="/var/lib/kubelet/pods/3c7be61c-2b7b-4b28-a924-a8d64028218e/volumes" Sep 11 23:45:52.019740 containerd[1521]: time="2025-09-11T23:45:52.019604902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1a81868b1d9b680c6bb16203a63aaf8563cc85e36fee2f6b8058b3a530832d5\" id:\"16865b178f3fe209409d962001b5fa8b8a3c82bb396d33a6023d520720a2aa12\" pid:4135 exit_status:1 exited_at:{seconds:1757634352 nanos:19299450}" Sep 11 23:45:52.150172 systemd-networkd[1413]: calie0510454745: Gained IPv6LL Sep 11 23:45:52.662039 systemd-networkd[1413]: vxlan.calico: Gained IPv6LL Sep 11 23:45:53.705576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1162821659.mount: 
Deactivated successfully. Sep 11 23:45:53.740164 containerd[1521]: time="2025-09-11T23:45:53.740115551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:53.741056 containerd[1521]: time="2025-09-11T23:45:53.740834899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 11 23:45:53.741854 containerd[1521]: time="2025-09-11T23:45:53.741813697Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:53.744099 containerd[1521]: time="2025-09-11T23:45:53.744055623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:53.744834 containerd[1521]: time="2025-09-11T23:45:53.744804372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.912041493s" Sep 11 23:45:53.744945 containerd[1521]: time="2025-09-11T23:45:53.744927617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 11 23:45:53.748686 containerd[1521]: time="2025-09-11T23:45:53.748641840Z" level=info msg="CreateContainer within sandbox \"9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" 
Sep 11 23:45:53.758025 containerd[1521]: time="2025-09-11T23:45:53.757968840Z" level=info msg="Container 960eae8787b2b63eb1f05ea4a8febd954ea927ddc40d6fc57ea9ce41f83b710e: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:53.765038 containerd[1521]: time="2025-09-11T23:45:53.764995471Z" level=info msg="CreateContainer within sandbox \"9ba92fa8ea6112b82ecfac117012690b8b259cd4886c364680d11c1574b1a305\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"960eae8787b2b63eb1f05ea4a8febd954ea927ddc40d6fc57ea9ce41f83b710e\"" Sep 11 23:45:53.765911 containerd[1521]: time="2025-09-11T23:45:53.765602054Z" level=info msg="StartContainer for \"960eae8787b2b63eb1f05ea4a8febd954ea927ddc40d6fc57ea9ce41f83b710e\"" Sep 11 23:45:53.768059 containerd[1521]: time="2025-09-11T23:45:53.768025107Z" level=info msg="connecting to shim 960eae8787b2b63eb1f05ea4a8febd954ea927ddc40d6fc57ea9ce41f83b710e" address="unix:///run/containerd/s/da456c914a6b7a282626785bcdb230a45bce75482b767194412dd44a8a895c9c" protocol=ttrpc version=3 Sep 11 23:45:53.788078 systemd[1]: Started cri-containerd-960eae8787b2b63eb1f05ea4a8febd954ea927ddc40d6fc57ea9ce41f83b710e.scope - libcontainer container 960eae8787b2b63eb1f05ea4a8febd954ea927ddc40d6fc57ea9ce41f83b710e. 
Sep 11 23:45:53.825038 containerd[1521]: time="2025-09-11T23:45:53.824996784Z" level=info msg="StartContainer for \"960eae8787b2b63eb1f05ea4a8febd954ea927ddc40d6fc57ea9ce41f83b710e\" returns successfully" Sep 11 23:45:53.963279 kubelet[2649]: I0911 23:45:53.963099 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5798456644-57hft" podStartSLOduration=1.822287454 podStartE2EDuration="4.963079748s" podCreationTimestamp="2025-09-11 23:45:49 +0000 UTC" firstStartedPulling="2025-09-11 23:45:50.604911233 +0000 UTC m=+32.860056288" lastFinishedPulling="2025-09-11 23:45:53.745703527 +0000 UTC m=+36.000848582" observedRunningTime="2025-09-11 23:45:53.961288959 +0000 UTC m=+36.216433974" watchObservedRunningTime="2025-09-11 23:45:53.963079748 +0000 UTC m=+36.218224843" Sep 11 23:45:56.838549 containerd[1521]: time="2025-09-11T23:45:56.838389393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f7f4669-km7p8,Uid:b7ff5acd-1257-4129-9bce-3657553f282b,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:45:56.838549 containerd[1521]: time="2025-09-11T23:45:56.838390954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f7f4669-qd5gx,Uid:6a07ef18-af1f-4f4a-af12-97ce702b3ff4,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:45:56.839022 containerd[1521]: time="2025-09-11T23:45:56.838393754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2v2z,Uid:2c50f3e7-d108-4dc8-8180-0e0ae420aa58,Namespace:calico-system,Attempt:0,}" Sep 11 23:45:57.055586 systemd-networkd[1413]: cali1441a2e3e3f: Link UP Sep 11 23:45:57.055833 systemd-networkd[1413]: cali1441a2e3e3f: Gained carrier Sep 11 23:45:57.072762 containerd[1521]: 2025-09-11 23:45:56.960 [INFO][4207] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--n2v2z-eth0 csi-node-driver- calico-system 
2c50f3e7-d108-4dc8-8180-0e0ae420aa58 682 0 2025-09-11 23:45:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-n2v2z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1441a2e3e3f [] [] }} ContainerID="ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" Namespace="calico-system" Pod="csi-node-driver-n2v2z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2v2z-" Sep 11 23:45:57.072762 containerd[1521]: 2025-09-11 23:45:56.961 [INFO][4207] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" Namespace="calico-system" Pod="csi-node-driver-n2v2z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2v2z-eth0" Sep 11 23:45:57.072762 containerd[1521]: 2025-09-11 23:45:57.008 [INFO][4253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" HandleID="k8s-pod-network.ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" Workload="localhost-k8s-csi--node--driver--n2v2z-eth0" Sep 11 23:45:57.072969 containerd[1521]: 2025-09-11 23:45:57.008 [INFO][4253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" HandleID="k8s-pod-network.ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" Workload="localhost-k8s-csi--node--driver--n2v2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001376f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-n2v2z", "timestamp":"2025-09-11 23:45:57.008182052 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:45:57.072969 containerd[1521]: 2025-09-11 23:45:57.008 [INFO][4253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:45:57.072969 containerd[1521]: 2025-09-11 23:45:57.008 [INFO][4253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 23:45:57.072969 containerd[1521]: 2025-09-11 23:45:57.008 [INFO][4253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:45:57.072969 containerd[1521]: 2025-09-11 23:45:57.024 [INFO][4253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" host="localhost" Sep 11 23:45:57.072969 containerd[1521]: 2025-09-11 23:45:57.030 [INFO][4253] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:45:57.072969 containerd[1521]: 2025-09-11 23:45:57.034 [INFO][4253] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:45:57.072969 containerd[1521]: 2025-09-11 23:45:57.036 [INFO][4253] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:57.072969 containerd[1521]: 2025-09-11 23:45:57.038 [INFO][4253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:57.072969 containerd[1521]: 2025-09-11 23:45:57.038 [INFO][4253] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" host="localhost" Sep 11 23:45:57.073168 containerd[1521]: 2025-09-11 23:45:57.039 [INFO][4253] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434 Sep 11 23:45:57.073168 containerd[1521]: 2025-09-11 23:45:57.042 [INFO][4253] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" host="localhost" Sep 11 23:45:57.073168 containerd[1521]: 2025-09-11 23:45:57.047 [INFO][4253] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" host="localhost" Sep 11 23:45:57.073168 containerd[1521]: 2025-09-11 23:45:57.047 [INFO][4253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" host="localhost" Sep 11 23:45:57.073168 containerd[1521]: 2025-09-11 23:45:57.047 [INFO][4253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 23:45:57.073168 containerd[1521]: 2025-09-11 23:45:57.047 [INFO][4253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" HandleID="k8s-pod-network.ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" Workload="localhost-k8s-csi--node--driver--n2v2z-eth0" Sep 11 23:45:57.073288 containerd[1521]: 2025-09-11 23:45:57.051 [INFO][4207] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" Namespace="calico-system" Pod="csi-node-driver-n2v2z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2v2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n2v2z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c50f3e7-d108-4dc8-8180-0e0ae420aa58", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-n2v2z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1441a2e3e3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:57.073339 containerd[1521]: 2025-09-11 23:45:57.052 [INFO][4207] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" Namespace="calico-system" Pod="csi-node-driver-n2v2z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2v2z-eth0" Sep 11 23:45:57.073339 containerd[1521]: 2025-09-11 23:45:57.052 [INFO][4207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1441a2e3e3f ContainerID="ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" Namespace="calico-system" Pod="csi-node-driver-n2v2z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2v2z-eth0" Sep 11 23:45:57.073339 containerd[1521]: 2025-09-11 23:45:57.055 [INFO][4207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" Namespace="calico-system" Pod="csi-node-driver-n2v2z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2v2z-eth0" Sep 11 23:45:57.073410 containerd[1521]: 2025-09-11 23:45:57.057 [INFO][4207] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" Namespace="calico-system" Pod="csi-node-driver-n2v2z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2v2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n2v2z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c50f3e7-d108-4dc8-8180-0e0ae420aa58", ResourceVersion:"682", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434", Pod:"csi-node-driver-n2v2z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1441a2e3e3f", MAC:"8e:7f:90:e6:27:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:57.073496 containerd[1521]: 2025-09-11 23:45:57.067 [INFO][4207] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" Namespace="calico-system" Pod="csi-node-driver-n2v2z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2v2z-eth0" Sep 11 23:45:57.094311 containerd[1521]: time="2025-09-11T23:45:57.094162159Z" level=info msg="connecting to shim ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434" address="unix:///run/containerd/s/9a5ddffcf810815649d8ee900daff892b6913fb16e4577deb25fb087b8b44cc8" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:57.121115 systemd[1]: Started 
cri-containerd-ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434.scope - libcontainer container ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434. Sep 11 23:45:57.169265 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:45:57.201236 systemd-networkd[1413]: calie7cd69bf24b: Link UP Sep 11 23:45:57.202049 systemd-networkd[1413]: calie7cd69bf24b: Gained carrier Sep 11 23:45:57.217257 containerd[1521]: time="2025-09-11T23:45:57.217119294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2v2z,Uid:2c50f3e7-d108-4dc8-8180-0e0ae420aa58,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434\"" Sep 11 23:45:57.218987 containerd[1521]: time="2025-09-11T23:45:57.218716669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 23:45:57.220174 containerd[1521]: 2025-09-11 23:45:56.983 [INFO][4216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0 calico-apiserver-5d7f7f4669- calico-apiserver b7ff5acd-1257-4129-9bce-3657553f282b 789 0 2025-09-11 23:45:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d7f7f4669 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d7f7f4669-km7p8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie7cd69bf24b [] [] }} ContainerID="a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-km7p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-" Sep 11 23:45:57.220174 containerd[1521]: 2025-09-11 
23:45:56.983 [INFO][4216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-km7p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0" Sep 11 23:45:57.220174 containerd[1521]: 2025-09-11 23:45:57.014 [INFO][4263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" HandleID="k8s-pod-network.a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" Workload="localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0" Sep 11 23:45:57.220472 containerd[1521]: 2025-09-11 23:45:57.014 [INFO][4263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" HandleID="k8s-pod-network.a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" Workload="localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a37b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d7f7f4669-km7p8", "timestamp":"2025-09-11 23:45:57.01427582 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:45:57.220472 containerd[1521]: 2025-09-11 23:45:57.014 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:45:57.220472 containerd[1521]: 2025-09-11 23:45:57.047 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:45:57.220472 containerd[1521]: 2025-09-11 23:45:57.047 [INFO][4263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:45:57.220472 containerd[1521]: 2025-09-11 23:45:57.123 [INFO][4263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" host="localhost" Sep 11 23:45:57.220472 containerd[1521]: 2025-09-11 23:45:57.143 [INFO][4263] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:45:57.220472 containerd[1521]: 2025-09-11 23:45:57.167 [INFO][4263] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:45:57.220472 containerd[1521]: 2025-09-11 23:45:57.172 [INFO][4263] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:57.220472 containerd[1521]: 2025-09-11 23:45:57.177 [INFO][4263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:57.220472 containerd[1521]: 2025-09-11 23:45:57.177 [INFO][4263] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" host="localhost" Sep 11 23:45:57.220698 containerd[1521]: 2025-09-11 23:45:57.181 [INFO][4263] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba Sep 11 23:45:57.220698 containerd[1521]: 2025-09-11 23:45:57.188 [INFO][4263] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" host="localhost" Sep 11 23:45:57.220698 containerd[1521]: 2025-09-11 23:45:57.194 [INFO][4263] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" host="localhost" Sep 11 23:45:57.220698 containerd[1521]: 2025-09-11 23:45:57.194 [INFO][4263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" host="localhost" Sep 11 23:45:57.220698 containerd[1521]: 2025-09-11 23:45:57.194 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:45:57.220698 containerd[1521]: 2025-09-11 23:45:57.194 [INFO][4263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" HandleID="k8s-pod-network.a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" Workload="localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0" Sep 11 23:45:57.220810 containerd[1521]: 2025-09-11 23:45:57.197 [INFO][4216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-km7p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0", GenerateName:"calico-apiserver-5d7f7f4669-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7ff5acd-1257-4129-9bce-3657553f282b", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7f7f4669", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d7f7f4669-km7p8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7cd69bf24b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:57.220859 containerd[1521]: 2025-09-11 23:45:57.197 [INFO][4216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-km7p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0" Sep 11 23:45:57.220859 containerd[1521]: 2025-09-11 23:45:57.197 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7cd69bf24b ContainerID="a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-km7p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0" Sep 11 23:45:57.220859 containerd[1521]: 2025-09-11 23:45:57.202 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-km7p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0" Sep 11 23:45:57.221294 containerd[1521]: 2025-09-11 23:45:57.203 [INFO][4216] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-km7p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0", GenerateName:"calico-apiserver-5d7f7f4669-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7ff5acd-1257-4129-9bce-3657553f282b", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7f7f4669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba", Pod:"calico-apiserver-5d7f7f4669-km7p8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7cd69bf24b", MAC:"5a:74:5b:d5:23:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:57.221356 containerd[1521]: 2025-09-11 23:45:57.215 [INFO][4216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-km7p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--km7p8-eth0" Sep 11 23:45:57.241448 containerd[1521]: time="2025-09-11T23:45:57.241330124Z" level=info msg="connecting to shim a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba" address="unix:///run/containerd/s/b19b00d77897cc3393cc2ce621afbc456cefd26a6079cfc756296cf0843798d4" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:57.265077 systemd[1]: Started cri-containerd-a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba.scope - libcontainer container a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba. Sep 11 23:45:57.283140 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:35848.service - OpenSSH per-connection server daemon (10.0.0.1:35848). Sep 11 23:45:57.288912 systemd-networkd[1413]: cali83202cfd395: Link UP Sep 11 23:45:57.289370 systemd-networkd[1413]: cali83202cfd395: Gained carrier Sep 11 23:45:57.305828 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:45:57.315004 containerd[1521]: 2025-09-11 23:45:56.960 [INFO][4217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0 calico-apiserver-5d7f7f4669- calico-apiserver 6a07ef18-af1f-4f4a-af12-97ce702b3ff4 787 0 2025-09-11 23:45:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d7f7f4669 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d7f7f4669-qd5gx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali83202cfd395 [] [] }} 
ContainerID="40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-qd5gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-" Sep 11 23:45:57.315004 containerd[1521]: 2025-09-11 23:45:56.960 [INFO][4217] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-qd5gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0" Sep 11 23:45:57.315004 containerd[1521]: 2025-09-11 23:45:57.014 [INFO][4251] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" HandleID="k8s-pod-network.40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" Workload="localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0" Sep 11 23:45:57.315181 containerd[1521]: 2025-09-11 23:45:57.015 [INFO][4251] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" HandleID="k8s-pod-network.40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" Workload="localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400051b1a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d7f7f4669-qd5gx", "timestamp":"2025-09-11 23:45:57.014874961 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:45:57.315181 containerd[1521]: 2025-09-11 23:45:57.015 [INFO][4251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 11 23:45:57.315181 containerd[1521]: 2025-09-11 23:45:57.194 [INFO][4251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 23:45:57.315181 containerd[1521]: 2025-09-11 23:45:57.196 [INFO][4251] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:45:57.315181 containerd[1521]: 2025-09-11 23:45:57.221 [INFO][4251] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" host="localhost" Sep 11 23:45:57.315181 containerd[1521]: 2025-09-11 23:45:57.242 [INFO][4251] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:45:57.315181 containerd[1521]: 2025-09-11 23:45:57.257 [INFO][4251] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:45:57.315181 containerd[1521]: 2025-09-11 23:45:57.260 [INFO][4251] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:57.315181 containerd[1521]: 2025-09-11 23:45:57.263 [INFO][4251] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:57.315181 containerd[1521]: 2025-09-11 23:45:57.264 [INFO][4251] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" host="localhost" Sep 11 23:45:57.315382 containerd[1521]: 2025-09-11 23:45:57.265 [INFO][4251] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45 Sep 11 23:45:57.315382 containerd[1521]: 2025-09-11 23:45:57.270 [INFO][4251] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" host="localhost" Sep 11 23:45:57.315382 containerd[1521]: 2025-09-11 23:45:57.281 [INFO][4251] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" host="localhost" Sep 11 23:45:57.315382 containerd[1521]: 2025-09-11 23:45:57.281 [INFO][4251] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" host="localhost" Sep 11 23:45:57.315382 containerd[1521]: 2025-09-11 23:45:57.281 [INFO][4251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:45:57.315382 containerd[1521]: 2025-09-11 23:45:57.281 [INFO][4251] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" HandleID="k8s-pod-network.40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" Workload="localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0" Sep 11 23:45:57.315495 containerd[1521]: 2025-09-11 23:45:57.287 [INFO][4217] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-qd5gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0", GenerateName:"calico-apiserver-5d7f7f4669-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a07ef18-af1f-4f4a-af12-97ce702b3ff4", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"5d7f7f4669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d7f7f4669-qd5gx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83202cfd395", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:57.315544 containerd[1521]: 2025-09-11 23:45:57.287 [INFO][4217] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-qd5gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0" Sep 11 23:45:57.315544 containerd[1521]: 2025-09-11 23:45:57.287 [INFO][4217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83202cfd395 ContainerID="40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-qd5gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0" Sep 11 23:45:57.315544 containerd[1521]: 2025-09-11 23:45:57.289 [INFO][4217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-qd5gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0" Sep 11 23:45:57.315601 
containerd[1521]: 2025-09-11 23:45:57.290 [INFO][4217] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-qd5gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0", GenerateName:"calico-apiserver-5d7f7f4669-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a07ef18-af1f-4f4a-af12-97ce702b3ff4", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7f7f4669", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45", Pod:"calico-apiserver-5d7f7f4669-qd5gx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83202cfd395", MAC:"0a:53:31:f3:d3:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:57.315660 containerd[1521]: 2025-09-11 
23:45:57.302 [INFO][4217] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f7f4669-qd5gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f7f4669--qd5gx-eth0" Sep 11 23:45:57.352851 containerd[1521]: time="2025-09-11T23:45:57.352620819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f7f4669-km7p8,Uid:b7ff5acd-1257-4129-9bce-3657553f282b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba\"" Sep 11 23:45:57.365124 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 35848 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:45:57.368450 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:45:57.373110 systemd-logind[1456]: New session 8 of user core. Sep 11 23:45:57.381105 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 23:45:57.384689 containerd[1521]: time="2025-09-11T23:45:57.384636277Z" level=info msg="connecting to shim 40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45" address="unix:///run/containerd/s/2d8a775d11dad6ca8c5244d4ff0e53fb3274a7a08fdc4b7008839a2b2678ac7e" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:57.412137 systemd[1]: Started cri-containerd-40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45.scope - libcontainer container 40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45. 
Sep 11 23:45:57.423957 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:45:57.446581 containerd[1521]: time="2025-09-11T23:45:57.446543719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f7f4669-qd5gx,Uid:6a07ef18-af1f-4f4a-af12-97ce702b3ff4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45\"" Sep 11 23:45:57.555431 sshd[4420]: Connection closed by 10.0.0.1 port 35848 Sep 11 23:45:57.556116 sshd-session[4388]: pam_unix(sshd:session): session closed for user core Sep 11 23:45:57.560068 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:35848.service: Deactivated successfully. Sep 11 23:45:57.561920 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 23:45:57.562590 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Sep 11 23:45:57.563810 systemd-logind[1456]: Removed session 8. 
Sep 11 23:45:57.840928 containerd[1521]: time="2025-09-11T23:45:57.840726832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-879ns,Uid:26419464-e9d2-4216-9042-f037752cbf30,Namespace:calico-system,Attempt:0,}" Sep 11 23:45:57.844732 containerd[1521]: time="2025-09-11T23:45:57.844652047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c68f96f4-6lq2g,Uid:c2ab5d0a-092a-4cff-bd38-6146dd672da3,Namespace:calico-system,Attempt:0,}" Sep 11 23:45:57.992507 systemd-networkd[1413]: cali596af7bec4f: Link UP Sep 11 23:45:57.992652 systemd-networkd[1413]: cali596af7bec4f: Gained carrier Sep 11 23:45:58.009638 containerd[1521]: 2025-09-11 23:45:57.875 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--879ns-eth0 goldmane-54d579b49d- calico-system 26419464-e9d2-4216-9042-f037752cbf30 790 0 2025-09-11 23:45:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-879ns eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali596af7bec4f [] [] }} ContainerID="b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" Namespace="calico-system" Pod="goldmane-54d579b49d-879ns" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--879ns-" Sep 11 23:45:58.009638 containerd[1521]: 2025-09-11 23:45:57.875 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" Namespace="calico-system" Pod="goldmane-54d579b49d-879ns" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--879ns-eth0" Sep 11 23:45:58.009638 containerd[1521]: 2025-09-11 23:45:57.909 [INFO][4496] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" HandleID="k8s-pod-network.b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" Workload="localhost-k8s-goldmane--54d579b49d--879ns-eth0" Sep 11 23:45:58.009837 containerd[1521]: 2025-09-11 23:45:57.910 [INFO][4496] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" HandleID="k8s-pod-network.b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" Workload="localhost-k8s-goldmane--54d579b49d--879ns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-879ns", "timestamp":"2025-09-11 23:45:57.909684916 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:45:58.009837 containerd[1521]: 2025-09-11 23:45:57.910 [INFO][4496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:45:58.009837 containerd[1521]: 2025-09-11 23:45:57.910 [INFO][4496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:45:58.009837 containerd[1521]: 2025-09-11 23:45:57.910 [INFO][4496] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:45:58.009837 containerd[1521]: 2025-09-11 23:45:57.920 [INFO][4496] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" host="localhost" Sep 11 23:45:58.009837 containerd[1521]: 2025-09-11 23:45:57.933 [INFO][4496] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:45:58.009837 containerd[1521]: 2025-09-11 23:45:57.947 [INFO][4496] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:45:58.009837 containerd[1521]: 2025-09-11 23:45:57.950 [INFO][4496] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:58.009837 containerd[1521]: 2025-09-11 23:45:57.958 [INFO][4496] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:58.009837 containerd[1521]: 2025-09-11 23:45:57.958 [INFO][4496] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" host="localhost" Sep 11 23:45:58.010063 containerd[1521]: 2025-09-11 23:45:57.960 [INFO][4496] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a Sep 11 23:45:58.010063 containerd[1521]: 2025-09-11 23:45:57.967 [INFO][4496] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" host="localhost" Sep 11 23:45:58.010063 containerd[1521]: 2025-09-11 23:45:57.980 [INFO][4496] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" host="localhost" Sep 11 23:45:58.010063 containerd[1521]: 2025-09-11 23:45:57.980 [INFO][4496] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" host="localhost" Sep 11 23:45:58.010063 containerd[1521]: 2025-09-11 23:45:57.980 [INFO][4496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:45:58.010063 containerd[1521]: 2025-09-11 23:45:57.980 [INFO][4496] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" HandleID="k8s-pod-network.b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" Workload="localhost-k8s-goldmane--54d579b49d--879ns-eth0" Sep 11 23:45:58.010171 containerd[1521]: 2025-09-11 23:45:57.990 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" Namespace="calico-system" Pod="goldmane-54d579b49d-879ns" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--879ns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--879ns-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"26419464-e9d2-4216-9042-f037752cbf30", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-879ns", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali596af7bec4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:58.010171 containerd[1521]: 2025-09-11 23:45:57.990 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" Namespace="calico-system" Pod="goldmane-54d579b49d-879ns" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--879ns-eth0" Sep 11 23:45:58.010238 containerd[1521]: 2025-09-11 23:45:57.990 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali596af7bec4f ContainerID="b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" Namespace="calico-system" Pod="goldmane-54d579b49d-879ns" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--879ns-eth0" Sep 11 23:45:58.010238 containerd[1521]: 2025-09-11 23:45:57.992 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" Namespace="calico-system" Pod="goldmane-54d579b49d-879ns" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--879ns-eth0" Sep 11 23:45:58.010277 containerd[1521]: 2025-09-11 23:45:57.995 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" Namespace="calico-system" 
Pod="goldmane-54d579b49d-879ns" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--879ns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--879ns-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"26419464-e9d2-4216-9042-f037752cbf30", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a", Pod:"goldmane-54d579b49d-879ns", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali596af7bec4f", MAC:"b6:8e:c6:9f:8d:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:58.010322 containerd[1521]: 2025-09-11 23:45:58.006 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" Namespace="calico-system" Pod="goldmane-54d579b49d-879ns" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--879ns-eth0" Sep 11 23:45:58.026825 containerd[1521]: time="2025-09-11T23:45:58.026658183Z" 
level=info msg="connecting to shim b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a" address="unix:///run/containerd/s/efe558397717be7853ab7128a29e2c02572e3af9c8f06b15ef3d355089dbd72c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:58.059425 systemd[1]: Started cri-containerd-b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a.scope - libcontainer container b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a. Sep 11 23:45:58.060534 systemd-networkd[1413]: cali2e5cb83d6e3: Link UP Sep 11 23:45:58.060867 systemd-networkd[1413]: cali2e5cb83d6e3: Gained carrier Sep 11 23:45:58.077677 containerd[1521]: 2025-09-11 23:45:57.887 [INFO][4479] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0 calico-kube-controllers-74c68f96f4- calico-system c2ab5d0a-092a-4cff-bd38-6146dd672da3 786 0 2025-09-11 23:45:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74c68f96f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-74c68f96f4-6lq2g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2e5cb83d6e3 [] [] }} ContainerID="34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" Namespace="calico-system" Pod="calico-kube-controllers-74c68f96f4-6lq2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-" Sep 11 23:45:58.077677 containerd[1521]: 2025-09-11 23:45:57.887 [INFO][4479] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" Namespace="calico-system" Pod="calico-kube-controllers-74c68f96f4-6lq2g" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0" Sep 11 23:45:58.077677 containerd[1521]: 2025-09-11 23:45:57.923 [INFO][4504] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" HandleID="k8s-pod-network.34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" Workload="localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0" Sep 11 23:45:58.077924 containerd[1521]: 2025-09-11 23:45:57.923 [INFO][4504] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" HandleID="k8s-pod-network.34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" Workload="localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003233c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-74c68f96f4-6lq2g", "timestamp":"2025-09-11 23:45:57.923514551 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:45:58.077924 containerd[1521]: 2025-09-11 23:45:57.926 [INFO][4504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:45:58.077924 containerd[1521]: 2025-09-11 23:45:57.980 [INFO][4504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:45:58.077924 containerd[1521]: 2025-09-11 23:45:57.980 [INFO][4504] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:45:58.077924 containerd[1521]: 2025-09-11 23:45:58.024 [INFO][4504] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" host="localhost" Sep 11 23:45:58.077924 containerd[1521]: 2025-09-11 23:45:58.034 [INFO][4504] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:45:58.077924 containerd[1521]: 2025-09-11 23:45:58.039 [INFO][4504] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:45:58.077924 containerd[1521]: 2025-09-11 23:45:58.042 [INFO][4504] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:58.077924 containerd[1521]: 2025-09-11 23:45:58.044 [INFO][4504] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:58.077924 containerd[1521]: 2025-09-11 23:45:58.044 [INFO][4504] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" host="localhost" Sep 11 23:45:58.078120 containerd[1521]: 2025-09-11 23:45:58.046 [INFO][4504] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392 Sep 11 23:45:58.078120 containerd[1521]: 2025-09-11 23:45:58.050 [INFO][4504] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" host="localhost" Sep 11 23:45:58.078120 containerd[1521]: 2025-09-11 23:45:58.056 [INFO][4504] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" host="localhost" Sep 11 23:45:58.078120 containerd[1521]: 2025-09-11 23:45:58.056 [INFO][4504] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" host="localhost" Sep 11 23:45:58.078120 containerd[1521]: 2025-09-11 23:45:58.056 [INFO][4504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:45:58.078120 containerd[1521]: 2025-09-11 23:45:58.056 [INFO][4504] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" HandleID="k8s-pod-network.34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" Workload="localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0" Sep 11 23:45:58.078227 containerd[1521]: 2025-09-11 23:45:58.058 [INFO][4479] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" Namespace="calico-system" Pod="calico-kube-controllers-74c68f96f4-6lq2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0", GenerateName:"calico-kube-controllers-74c68f96f4-", Namespace:"calico-system", SelfLink:"", UID:"c2ab5d0a-092a-4cff-bd38-6146dd672da3", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c68f96f4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-74c68f96f4-6lq2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e5cb83d6e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:58.078386 containerd[1521]: 2025-09-11 23:45:58.059 [INFO][4479] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" Namespace="calico-system" Pod="calico-kube-controllers-74c68f96f4-6lq2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0" Sep 11 23:45:58.078386 containerd[1521]: 2025-09-11 23:45:58.059 [INFO][4479] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e5cb83d6e3 ContainerID="34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" Namespace="calico-system" Pod="calico-kube-controllers-74c68f96f4-6lq2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0" Sep 11 23:45:58.078386 containerd[1521]: 2025-09-11 23:45:58.060 [INFO][4479] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" Namespace="calico-system" Pod="calico-kube-controllers-74c68f96f4-6lq2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0" Sep 11 23:45:58.078446 containerd[1521]: 
2025-09-11 23:45:58.061 [INFO][4479] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" Namespace="calico-system" Pod="calico-kube-controllers-74c68f96f4-6lq2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0", GenerateName:"calico-kube-controllers-74c68f96f4-", Namespace:"calico-system", SelfLink:"", UID:"c2ab5d0a-092a-4cff-bd38-6146dd672da3", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c68f96f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392", Pod:"calico-kube-controllers-74c68f96f4-6lq2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e5cb83d6e3", MAC:"9e:20:cc:d4:1a:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:58.078525 
containerd[1521]: 2025-09-11 23:45:58.071 [INFO][4479] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" Namespace="calico-system" Pod="calico-kube-controllers-74c68f96f4-6lq2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c68f96f4--6lq2g-eth0" Sep 11 23:45:58.082407 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:45:58.113390 containerd[1521]: time="2025-09-11T23:45:58.113102388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-879ns,Uid:26419464-e9d2-4216-9042-f037752cbf30,Namespace:calico-system,Attempt:0,} returns sandbox id \"b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a\"" Sep 11 23:45:58.120702 containerd[1521]: time="2025-09-11T23:45:58.120648440Z" level=info msg="connecting to shim 34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392" address="unix:///run/containerd/s/dbe9becd0fb74a62bbec155b8b3057ccb4064fe9f247180b5310e5395a870338" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:58.176029 systemd[1]: Started cri-containerd-34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392.scope - libcontainer container 34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392. 
Sep 11 23:45:58.187851 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:45:58.214388 containerd[1521]: time="2025-09-11T23:45:58.214347447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c68f96f4-6lq2g,Uid:c2ab5d0a-092a-4cff-bd38-6146dd672da3,Namespace:calico-system,Attempt:0,} returns sandbox id \"34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392\"" Sep 11 23:45:58.300519 containerd[1521]: time="2025-09-11T23:45:58.300017386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:58.300732 containerd[1521]: time="2025-09-11T23:45:58.300680288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 11 23:45:58.301371 containerd[1521]: time="2025-09-11T23:45:58.301342350Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:58.303294 containerd[1521]: time="2025-09-11T23:45:58.303267295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:45:58.304036 containerd[1521]: time="2025-09-11T23:45:58.304003399Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.085012641s" Sep 11 23:45:58.304072 containerd[1521]: time="2025-09-11T23:45:58.304039200Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 11 23:45:58.306531 containerd[1521]: time="2025-09-11T23:45:58.305415766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 23:45:58.306531 containerd[1521]: time="2025-09-11T23:45:58.306239914Z" level=info msg="CreateContainer within sandbox \"ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 11 23:45:58.314078 containerd[1521]: time="2025-09-11T23:45:58.314017613Z" level=info msg="Container 872a64a5b22d9c4fbbcf91d7319e3df8268b529e33e0525e82c64d1dbc1ed516: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:58.324725 containerd[1521]: time="2025-09-11T23:45:58.324617207Z" level=info msg="CreateContainer within sandbox \"ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"872a64a5b22d9c4fbbcf91d7319e3df8268b529e33e0525e82c64d1dbc1ed516\"" Sep 11 23:45:58.325138 containerd[1521]: time="2025-09-11T23:45:58.325110504Z" level=info msg="StartContainer for \"872a64a5b22d9c4fbbcf91d7319e3df8268b529e33e0525e82c64d1dbc1ed516\"" Sep 11 23:45:58.326872 containerd[1521]: time="2025-09-11T23:45:58.326846642Z" level=info msg="connecting to shim 872a64a5b22d9c4fbbcf91d7319e3df8268b529e33e0525e82c64d1dbc1ed516" address="unix:///run/containerd/s/9a5ddffcf810815649d8ee900daff892b6913fb16e4577deb25fb087b8b44cc8" protocol=ttrpc version=3 Sep 11 23:45:58.348032 systemd[1]: Started cri-containerd-872a64a5b22d9c4fbbcf91d7319e3df8268b529e33e0525e82c64d1dbc1ed516.scope - libcontainer container 872a64a5b22d9c4fbbcf91d7319e3df8268b529e33e0525e82c64d1dbc1ed516. 
Sep 11 23:45:58.394551 containerd[1521]: time="2025-09-11T23:45:58.394458978Z" level=info msg="StartContainer for \"872a64a5b22d9c4fbbcf91d7319e3df8268b529e33e0525e82c64d1dbc1ed516\" returns successfully" Sep 11 23:45:58.550061 systemd-networkd[1413]: cali1441a2e3e3f: Gained IPv6LL Sep 11 23:45:58.806065 systemd-networkd[1413]: cali83202cfd395: Gained IPv6LL Sep 11 23:45:58.838752 containerd[1521]: time="2025-09-11T23:45:58.838709325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cgpqc,Uid:bd6c0474-4722-46bb-92d1-451df3477b61,Namespace:kube-system,Attempt:0,}" Sep 11 23:45:58.934037 systemd-networkd[1413]: calie7cd69bf24b: Gained IPv6LL Sep 11 23:45:58.955084 systemd-networkd[1413]: calia49b8726c16: Link UP Sep 11 23:45:58.955939 systemd-networkd[1413]: calia49b8726c16: Gained carrier Sep 11 23:45:58.968733 containerd[1521]: 2025-09-11 23:45:58.874 [INFO][4658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0 coredns-668d6bf9bc- kube-system bd6c0474-4722-46bb-92d1-451df3477b61 779 0 2025-09-11 23:45:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-cgpqc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia49b8726c16 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgpqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cgpqc-" Sep 11 23:45:58.968733 containerd[1521]: 2025-09-11 23:45:58.875 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgpqc" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0" Sep 11 23:45:58.968733 containerd[1521]: 2025-09-11 23:45:58.912 [INFO][4672] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" HandleID="k8s-pod-network.56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" Workload="localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0" Sep 11 23:45:58.969119 containerd[1521]: 2025-09-11 23:45:58.912 [INFO][4672] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" HandleID="k8s-pod-network.56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" Workload="localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004953f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-cgpqc", "timestamp":"2025-09-11 23:45:58.9122651 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:45:58.969119 containerd[1521]: 2025-09-11 23:45:58.912 [INFO][4672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:45:58.969119 containerd[1521]: 2025-09-11 23:45:58.912 [INFO][4672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:45:58.969119 containerd[1521]: 2025-09-11 23:45:58.912 [INFO][4672] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:45:58.969119 containerd[1521]: 2025-09-11 23:45:58.922 [INFO][4672] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" host="localhost" Sep 11 23:45:58.969119 containerd[1521]: 2025-09-11 23:45:58.926 [INFO][4672] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:45:58.969119 containerd[1521]: 2025-09-11 23:45:58.930 [INFO][4672] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:45:58.969119 containerd[1521]: 2025-09-11 23:45:58.932 [INFO][4672] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:58.969119 containerd[1521]: 2025-09-11 23:45:58.935 [INFO][4672] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:58.969119 containerd[1521]: 2025-09-11 23:45:58.935 [INFO][4672] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" host="localhost" Sep 11 23:45:58.969323 containerd[1521]: 2025-09-11 23:45:58.938 [INFO][4672] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983 Sep 11 23:45:58.969323 containerd[1521]: 2025-09-11 23:45:58.942 [INFO][4672] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" host="localhost" Sep 11 23:45:58.969323 containerd[1521]: 2025-09-11 23:45:58.948 [INFO][4672] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" host="localhost" Sep 11 23:45:58.969323 containerd[1521]: 2025-09-11 23:45:58.948 [INFO][4672] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" host="localhost" Sep 11 23:45:58.969323 containerd[1521]: 2025-09-11 23:45:58.948 [INFO][4672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:45:58.969323 containerd[1521]: 2025-09-11 23:45:58.948 [INFO][4672] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" HandleID="k8s-pod-network.56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" Workload="localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0" Sep 11 23:45:58.969436 containerd[1521]: 2025-09-11 23:45:58.950 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgpqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd6c0474-4722-46bb-92d1-451df3477b61", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-cgpqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia49b8726c16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:58.969500 containerd[1521]: 2025-09-11 23:45:58.951 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgpqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0" Sep 11 23:45:58.969500 containerd[1521]: 2025-09-11 23:45:58.951 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia49b8726c16 ContainerID="56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgpqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0" Sep 11 23:45:58.969500 containerd[1521]: 2025-09-11 23:45:58.956 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgpqc" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0" Sep 11 23:45:58.969559 containerd[1521]: 2025-09-11 23:45:58.956 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgpqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd6c0474-4722-46bb-92d1-451df3477b61", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983", Pod:"coredns-668d6bf9bc-cgpqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia49b8726c16", MAC:"72:8a:86:5a:4b:8e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:58.969559 containerd[1521]: 2025-09-11 23:45:58.964 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgpqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cgpqc-eth0" Sep 11 23:45:58.999249 containerd[1521]: time="2025-09-11T23:45:58.999202801Z" level=info msg="connecting to shim 56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983" address="unix:///run/containerd/s/7be2671e2ef264ee2c44ea570c627c01379d850434dfefb4751ee6e6f36a9dd2" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:45:59.029042 systemd[1]: Started cri-containerd-56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983.scope - libcontainer container 56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983. 
Sep 11 23:45:59.039827 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:45:59.060214 containerd[1521]: time="2025-09-11T23:45:59.060122864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cgpqc,Uid:bd6c0474-4722-46bb-92d1-451df3477b61,Namespace:kube-system,Attempt:0,} returns sandbox id \"56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983\"" Sep 11 23:45:59.063176 containerd[1521]: time="2025-09-11T23:45:59.063113641Z" level=info msg="CreateContainer within sandbox \"56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 23:45:59.079396 containerd[1521]: time="2025-09-11T23:45:59.078797551Z" level=info msg="Container dcc5f0284cf0231861ae09cccbea060171d2e0186ae98d497749fe35d24932fd: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:45:59.086780 containerd[1521]: time="2025-09-11T23:45:59.086747650Z" level=info msg="CreateContainer within sandbox \"56eac305b1e5e4456f6a63a57cb9222d2510b955ed63c488a7b905334d25f983\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcc5f0284cf0231861ae09cccbea060171d2e0186ae98d497749fe35d24932fd\"" Sep 11 23:45:59.087608 containerd[1521]: time="2025-09-11T23:45:59.087542796Z" level=info msg="StartContainer for \"dcc5f0284cf0231861ae09cccbea060171d2e0186ae98d497749fe35d24932fd\"" Sep 11 23:45:59.088382 containerd[1521]: time="2025-09-11T23:45:59.088354542Z" level=info msg="connecting to shim dcc5f0284cf0231861ae09cccbea060171d2e0186ae98d497749fe35d24932fd" address="unix:///run/containerd/s/7be2671e2ef264ee2c44ea570c627c01379d850434dfefb4751ee6e6f36a9dd2" protocol=ttrpc version=3 Sep 11 23:45:59.116079 systemd[1]: Started cri-containerd-dcc5f0284cf0231861ae09cccbea060171d2e0186ae98d497749fe35d24932fd.scope - libcontainer container dcc5f0284cf0231861ae09cccbea060171d2e0186ae98d497749fe35d24932fd. 
Sep 11 23:45:59.126989 systemd-networkd[1413]: cali2e5cb83d6e3: Gained IPv6LL Sep 11 23:45:59.153641 containerd[1521]: time="2025-09-11T23:45:59.153603824Z" level=info msg="StartContainer for \"dcc5f0284cf0231861ae09cccbea060171d2e0186ae98d497749fe35d24932fd\" returns successfully" Sep 11 23:45:59.191009 systemd-networkd[1413]: cali596af7bec4f: Gained IPv6LL Sep 11 23:45:59.840135 containerd[1521]: time="2025-09-11T23:45:59.840094711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bkb89,Uid:15cf90eb-249b-4f72-bbf9-ef8ea7c68422,Namespace:kube-system,Attempt:0,}" Sep 11 23:45:59.978836 systemd-networkd[1413]: cali772f8906d1d: Link UP Sep 11 23:45:59.979205 systemd-networkd[1413]: cali772f8906d1d: Gained carrier Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.888 [INFO][4775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--bkb89-eth0 coredns-668d6bf9bc- kube-system 15cf90eb-249b-4f72-bbf9-ef8ea7c68422 785 0 2025-09-11 23:45:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-bkb89 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali772f8906d1d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" Namespace="kube-system" Pod="coredns-668d6bf9bc-bkb89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bkb89-" Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.888 [INFO][4775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" Namespace="kube-system" Pod="coredns-668d6bf9bc-bkb89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bkb89-eth0" Sep 11 
23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.920 [INFO][4790] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" HandleID="k8s-pod-network.b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" Workload="localhost-k8s-coredns--668d6bf9bc--bkb89-eth0" Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.920 [INFO][4790] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" HandleID="k8s-pod-network.b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" Workload="localhost-k8s-coredns--668d6bf9bc--bkb89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d8d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-bkb89", "timestamp":"2025-09-11 23:45:59.920170116 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.920 [INFO][4790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.920 [INFO][4790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.920 [INFO][4790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.930 [INFO][4790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" host="localhost" Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.936 [INFO][4790] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.951 [INFO][4790] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.957 [INFO][4790] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.960 [INFO][4790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.960 [INFO][4790] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" host="localhost" Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.962 [INFO][4790] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474 Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.966 [INFO][4790] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" host="localhost" Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.973 [INFO][4790] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" host="localhost" Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.974 [INFO][4790] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" host="localhost" Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.974 [INFO][4790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:45:59.996907 containerd[1521]: 2025-09-11 23:45:59.974 [INFO][4790] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" HandleID="k8s-pod-network.b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" Workload="localhost-k8s-coredns--668d6bf9bc--bkb89-eth0" Sep 11 23:45:59.998611 containerd[1521]: 2025-09-11 23:45:59.977 [INFO][4775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" Namespace="kube-system" Pod="coredns-668d6bf9bc-bkb89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bkb89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bkb89-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"15cf90eb-249b-4f72-bbf9-ef8ea7c68422", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-bkb89", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali772f8906d1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:59.998611 containerd[1521]: 2025-09-11 23:45:59.977 [INFO][4775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" Namespace="kube-system" Pod="coredns-668d6bf9bc-bkb89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bkb89-eth0" Sep 11 23:45:59.998611 containerd[1521]: 2025-09-11 23:45:59.977 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali772f8906d1d ContainerID="b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" Namespace="kube-system" Pod="coredns-668d6bf9bc-bkb89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bkb89-eth0" Sep 11 23:45:59.998611 containerd[1521]: 2025-09-11 23:45:59.979 [INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" Namespace="kube-system" Pod="coredns-668d6bf9bc-bkb89" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bkb89-eth0" Sep 11 23:45:59.998611 containerd[1521]: 2025-09-11 23:45:59.979 [INFO][4775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" Namespace="kube-system" Pod="coredns-668d6bf9bc-bkb89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bkb89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bkb89-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"15cf90eb-249b-4f72-bbf9-ef8ea7c68422", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 45, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474", Pod:"coredns-668d6bf9bc-bkb89", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali772f8906d1d", MAC:"7e:b4:37:47:8a:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:45:59.998611 containerd[1521]: 2025-09-11 23:45:59.993 [INFO][4775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" Namespace="kube-system" Pod="coredns-668d6bf9bc-bkb89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bkb89-eth0" Sep 11 23:46:00.051911 kubelet[2649]: I0911 23:46:00.051070 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cgpqc" podStartSLOduration=35.051052372 podStartE2EDuration="35.051052372s" podCreationTimestamp="2025-09-11 23:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:46:00.050250467 +0000 UTC m=+42.305395562" watchObservedRunningTime="2025-09-11 23:46:00.051052372 +0000 UTC m=+42.306197427" Sep 11 23:46:00.073332 containerd[1521]: time="2025-09-11T23:46:00.073246757Z" level=info msg="connecting to shim b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474" address="unix:///run/containerd/s/4b84684bbaff56b5d0619888b4128be25ee3d186801c8d2607c986b1b51e7a76" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:46:00.113226 systemd[1]: Started cri-containerd-b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474.scope - libcontainer container b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474. 
Sep 11 23:46:00.131919 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:46:00.157264 containerd[1521]: time="2025-09-11T23:46:00.157215461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bkb89,Uid:15cf90eb-249b-4f72-bbf9-ef8ea7c68422,Namespace:kube-system,Attempt:0,} returns sandbox id \"b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474\"" Sep 11 23:46:00.161868 containerd[1521]: time="2025-09-11T23:46:00.161833207Z" level=info msg="CreateContainer within sandbox \"b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 23:46:00.180075 containerd[1521]: time="2025-09-11T23:46:00.180036185Z" level=info msg="Container f79020373b2733f9c1a0ef5b4c7677cce38286dd4ecd05b798ed745640b7f2e1: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:46:00.181716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167095085.mount: Deactivated successfully. 
Sep 11 23:46:00.190282 containerd[1521]: time="2025-09-11T23:46:00.190203227Z" level=info msg="CreateContainer within sandbox \"b870aee685385dd75df1d34e7e3fee184737fc96866c3079a7d8b886be5de474\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f79020373b2733f9c1a0ef5b4c7677cce38286dd4ecd05b798ed745640b7f2e1\"" Sep 11 23:46:00.190846 containerd[1521]: time="2025-09-11T23:46:00.190811566Z" level=info msg="StartContainer for \"f79020373b2733f9c1a0ef5b4c7677cce38286dd4ecd05b798ed745640b7f2e1\"" Sep 11 23:46:00.191685 containerd[1521]: time="2025-09-11T23:46:00.191637153Z" level=info msg="connecting to shim f79020373b2733f9c1a0ef5b4c7677cce38286dd4ecd05b798ed745640b7f2e1" address="unix:///run/containerd/s/4b84684bbaff56b5d0619888b4128be25ee3d186801c8d2607c986b1b51e7a76" protocol=ttrpc version=3 Sep 11 23:46:00.215013 systemd-networkd[1413]: calia49b8726c16: Gained IPv6LL Sep 11 23:46:00.216084 systemd[1]: Started cri-containerd-f79020373b2733f9c1a0ef5b4c7677cce38286dd4ecd05b798ed745640b7f2e1.scope - libcontainer container f79020373b2733f9c1a0ef5b4c7677cce38286dd4ecd05b798ed745640b7f2e1. 
Sep 11 23:46:00.266805 containerd[1521]: time="2025-09-11T23:46:00.266755456Z" level=info msg="StartContainer for \"f79020373b2733f9c1a0ef5b4c7677cce38286dd4ecd05b798ed745640b7f2e1\" returns successfully" Sep 11 23:46:00.278811 containerd[1521]: time="2025-09-11T23:46:00.278763077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:00.278996 containerd[1521]: time="2025-09-11T23:46:00.278960923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 11 23:46:00.285444 containerd[1521]: time="2025-09-11T23:46:00.285032476Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:00.287165 containerd[1521]: time="2025-09-11T23:46:00.287099541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:00.287696 containerd[1521]: time="2025-09-11T23:46:00.287664759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.982225352s" Sep 11 23:46:00.287755 containerd[1521]: time="2025-09-11T23:46:00.287695640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 11 23:46:00.288876 containerd[1521]: time="2025-09-11T23:46:00.288852997Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 23:46:00.292286 containerd[1521]: time="2025-09-11T23:46:00.292064499Z" level=info msg="CreateContainer within sandbox \"a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 23:46:00.298938 containerd[1521]: time="2025-09-11T23:46:00.298909836Z" level=info msg="Container 5dfaf0e21acc489c192a0857062a7c4b1f6d87948735c943b98edb4aa344e849: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:46:00.306902 containerd[1521]: time="2025-09-11T23:46:00.306812807Z" level=info msg="CreateContainer within sandbox \"a294ff76be3c8654fac803d5a71c56200c02e25adf710d1056f8a81a0cf0c4ba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5dfaf0e21acc489c192a0857062a7c4b1f6d87948735c943b98edb4aa344e849\"" Sep 11 23:46:00.307636 containerd[1521]: time="2025-09-11T23:46:00.307604112Z" level=info msg="StartContainer for \"5dfaf0e21acc489c192a0857062a7c4b1f6d87948735c943b98edb4aa344e849\"" Sep 11 23:46:00.308545 containerd[1521]: time="2025-09-11T23:46:00.308512421Z" level=info msg="connecting to shim 5dfaf0e21acc489c192a0857062a7c4b1f6d87948735c943b98edb4aa344e849" address="unix:///run/containerd/s/b19b00d77897cc3393cc2ce621afbc456cefd26a6079cfc756296cf0843798d4" protocol=ttrpc version=3 Sep 11 23:46:00.338041 systemd[1]: Started cri-containerd-5dfaf0e21acc489c192a0857062a7c4b1f6d87948735c943b98edb4aa344e849.scope - libcontainer container 5dfaf0e21acc489c192a0857062a7c4b1f6d87948735c943b98edb4aa344e849. 
Sep 11 23:46:00.371020 containerd[1521]: time="2025-09-11T23:46:00.370527548Z" level=info msg="StartContainer for \"5dfaf0e21acc489c192a0857062a7c4b1f6d87948735c943b98edb4aa344e849\" returns successfully" Sep 11 23:46:00.569182 containerd[1521]: time="2025-09-11T23:46:00.569133129Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:00.570152 containerd[1521]: time="2025-09-11T23:46:00.570120800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 11 23:46:00.572269 containerd[1521]: time="2025-09-11T23:46:00.572237028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 283.182464ms" Sep 11 23:46:00.572321 containerd[1521]: time="2025-09-11T23:46:00.572281789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 11 23:46:00.573383 containerd[1521]: time="2025-09-11T23:46:00.573352503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 11 23:46:00.575668 containerd[1521]: time="2025-09-11T23:46:00.575634255Z" level=info msg="CreateContainer within sandbox \"40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 23:46:00.583069 containerd[1521]: time="2025-09-11T23:46:00.583036370Z" level=info msg="Container 8a947a0cda81de82812bca09e55046d8859f62d6d4e03185d033d0ba005379fe: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:46:00.589504 containerd[1521]: 
time="2025-09-11T23:46:00.589470054Z" level=info msg="CreateContainer within sandbox \"40df5af0f447fb8e0e652e00e2d8697d0a8f9a35de7abfd153b9a3826ca82f45\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8a947a0cda81de82812bca09e55046d8859f62d6d4e03185d033d0ba005379fe\"" Sep 11 23:46:00.590060 containerd[1521]: time="2025-09-11T23:46:00.590035352Z" level=info msg="StartContainer for \"8a947a0cda81de82812bca09e55046d8859f62d6d4e03185d033d0ba005379fe\"" Sep 11 23:46:00.592292 containerd[1521]: time="2025-09-11T23:46:00.592161980Z" level=info msg="connecting to shim 8a947a0cda81de82812bca09e55046d8859f62d6d4e03185d033d0ba005379fe" address="unix:///run/containerd/s/2d8a775d11dad6ca8c5244d4ff0e53fb3274a7a08fdc4b7008839a2b2678ac7e" protocol=ttrpc version=3 Sep 11 23:46:00.614061 systemd[1]: Started cri-containerd-8a947a0cda81de82812bca09e55046d8859f62d6d4e03185d033d0ba005379fe.scope - libcontainer container 8a947a0cda81de82812bca09e55046d8859f62d6d4e03185d033d0ba005379fe. 
Sep 11 23:46:00.656547 containerd[1521]: time="2025-09-11T23:46:00.656450019Z" level=info msg="StartContainer for \"8a947a0cda81de82812bca09e55046d8859f62d6d4e03185d033d0ba005379fe\" returns successfully" Sep 11 23:46:01.050643 kubelet[2649]: I0911 23:46:01.050534 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d7f7f4669-km7p8" podStartSLOduration=25.116327057 podStartE2EDuration="28.050519205s" podCreationTimestamp="2025-09-11 23:45:33 +0000 UTC" firstStartedPulling="2025-09-11 23:45:57.354532085 +0000 UTC m=+39.609677140" lastFinishedPulling="2025-09-11 23:46:00.288724233 +0000 UTC m=+42.543869288" observedRunningTime="2025-09-11 23:46:01.030947118 +0000 UTC m=+43.286092213" watchObservedRunningTime="2025-09-11 23:46:01.050519205 +0000 UTC m=+43.305664260" Sep 11 23:46:01.051332 kubelet[2649]: I0911 23:46:01.051276 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d7f7f4669-qd5gx" podStartSLOduration=24.926251945 podStartE2EDuration="28.051265428s" podCreationTimestamp="2025-09-11 23:45:33 +0000 UTC" firstStartedPulling="2025-09-11 23:45:57.44801261 +0000 UTC m=+39.703157665" lastFinishedPulling="2025-09-11 23:46:00.573026013 +0000 UTC m=+42.828171148" observedRunningTime="2025-09-11 23:46:01.050330239 +0000 UTC m=+43.305475294" watchObservedRunningTime="2025-09-11 23:46:01.051265428 +0000 UTC m=+43.306410483" Sep 11 23:46:01.065147 kubelet[2649]: I0911 23:46:01.065089 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bkb89" podStartSLOduration=36.065073935 podStartE2EDuration="36.065073935s" podCreationTimestamp="2025-09-11 23:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:46:01.064488477 +0000 UTC m=+43.319633532" watchObservedRunningTime="2025-09-11 23:46:01.065073935 
+0000 UTC m=+43.320218950" Sep 11 23:46:01.110305 systemd-networkd[1413]: cali772f8906d1d: Gained IPv6LL Sep 11 23:46:02.033895 kubelet[2649]: I0911 23:46:02.033465 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 23:46:02.417931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1152295649.mount: Deactivated successfully. Sep 11 23:46:02.573474 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:32972.service - OpenSSH per-connection server daemon (10.0.0.1:32972). Sep 11 23:46:02.652551 sshd[4991]: Accepted publickey for core from 10.0.0.1 port 32972 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:02.654408 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:02.659222 systemd-logind[1456]: New session 9 of user core. Sep 11 23:46:02.671123 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 11 23:46:02.870958 containerd[1521]: time="2025-09-11T23:46:02.870908308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:02.872096 containerd[1521]: time="2025-09-11T23:46:02.872069903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 11 23:46:02.873053 containerd[1521]: time="2025-09-11T23:46:02.873023812Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:02.876267 containerd[1521]: time="2025-09-11T23:46:02.876185788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:02.877114 containerd[1521]: time="2025-09-11T23:46:02.877087135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.30368159s" Sep 11 23:46:02.877314 containerd[1521]: time="2025-09-11T23:46:02.877209179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 11 23:46:02.878567 containerd[1521]: time="2025-09-11T23:46:02.878540139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 11 23:46:02.880875 containerd[1521]: time="2025-09-11T23:46:02.880802207Z" level=info msg="CreateContainer within sandbox 
\"b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 11 23:46:02.889660 containerd[1521]: time="2025-09-11T23:46:02.887388287Z" level=info msg="Container a98745f4b474319152feae1708a977e06da8e1595891840ee88825c9fc544fc7: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:46:02.899963 containerd[1521]: time="2025-09-11T23:46:02.899914906Z" level=info msg="CreateContainer within sandbox \"b95fe7537c9d1c317554992d514d3f6d1d1b8d6d9f0784aef192c5029de76a5a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a98745f4b474319152feae1708a977e06da8e1595891840ee88825c9fc544fc7\"" Sep 11 23:46:02.901529 containerd[1521]: time="2025-09-11T23:46:02.901428552Z" level=info msg="StartContainer for \"a98745f4b474319152feae1708a977e06da8e1595891840ee88825c9fc544fc7\"" Sep 11 23:46:02.909502 containerd[1521]: time="2025-09-11T23:46:02.909460155Z" level=info msg="connecting to shim a98745f4b474319152feae1708a977e06da8e1595891840ee88825c9fc544fc7" address="unix:///run/containerd/s/efe558397717be7853ab7128a29e2c02572e3af9c8f06b15ef3d355089dbd72c" protocol=ttrpc version=3 Sep 11 23:46:02.941117 systemd[1]: Started cri-containerd-a98745f4b474319152feae1708a977e06da8e1595891840ee88825c9fc544fc7.scope - libcontainer container a98745f4b474319152feae1708a977e06da8e1595891840ee88825c9fc544fc7. Sep 11 23:46:02.986860 sshd[4994]: Connection closed by 10.0.0.1 port 32972 Sep 11 23:46:02.987285 sshd-session[4991]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:02.990207 containerd[1521]: time="2025-09-11T23:46:02.990166839Z" level=info msg="StartContainer for \"a98745f4b474319152feae1708a977e06da8e1595891840ee88825c9fc544fc7\" returns successfully" Sep 11 23:46:02.993297 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:32972.service: Deactivated successfully. Sep 11 23:46:02.998714 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 11 23:46:03.002087 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Sep 11 23:46:03.003485 systemd-logind[1456]: Removed session 9. Sep 11 23:46:03.062923 kubelet[2649]: I0911 23:46:03.062631 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-879ns" podStartSLOduration=22.30133094 podStartE2EDuration="27.062610792s" podCreationTimestamp="2025-09-11 23:45:36 +0000 UTC" firstStartedPulling="2025-09-11 23:45:58.117164124 +0000 UTC m=+40.372309179" lastFinishedPulling="2025-09-11 23:46:02.878443936 +0000 UTC m=+45.133589031" observedRunningTime="2025-09-11 23:46:03.061654243 +0000 UTC m=+45.316799258" watchObservedRunningTime="2025-09-11 23:46:03.062610792 +0000 UTC m=+45.317755927" Sep 11 23:46:03.158932 containerd[1521]: time="2025-09-11T23:46:03.158865723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a98745f4b474319152feae1708a977e06da8e1595891840ee88825c9fc544fc7\" id:\"6ca6f07a384a0a2c1b78eddb7280549f30ef8ca7b8724a05960f99bffe219714\" pid:5057 exit_status:1 exited_at:{seconds:1757634363 nanos:153426001}" Sep 11 23:46:04.167797 containerd[1521]: time="2025-09-11T23:46:04.167758304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a98745f4b474319152feae1708a977e06da8e1595891840ee88825c9fc544fc7\" id:\"22893c7db87476c284ecda414c0217a1088386f3d36a37241ed7a37c5dc3fe11\" pid:5086 exit_status:1 exited_at:{seconds:1757634364 nanos:167010563}" Sep 11 23:46:04.950212 containerd[1521]: time="2025-09-11T23:46:04.950031914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:04.950735 containerd[1521]: time="2025-09-11T23:46:04.950500047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 11 23:46:04.951496 containerd[1521]: 
time="2025-09-11T23:46:04.951465515Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:04.954017 containerd[1521]: time="2025-09-11T23:46:04.953981668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:04.954666 containerd[1521]: time="2025-09-11T23:46:04.954529284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.075866702s" Sep 11 23:46:04.965513 containerd[1521]: time="2025-09-11T23:46:04.954564885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 11 23:46:04.966554 containerd[1521]: time="2025-09-11T23:46:04.966512472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 11 23:46:04.973523 containerd[1521]: time="2025-09-11T23:46:04.973056822Z" level=info msg="CreateContainer within sandbox \"34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 11 23:46:04.979588 containerd[1521]: time="2025-09-11T23:46:04.979549770Z" level=info msg="Container 0ef3cf132cbe9177d6ad4c65c6e6d654a58721cbf067fb41019c9499917c323f: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:46:04.987349 containerd[1521]: time="2025-09-11T23:46:04.987264394Z" level=info 
msg="CreateContainer within sandbox \"34a31890eb29ef0cd69aca6b7c83c30fba2bfad231b7cba31f608bbd68452392\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0ef3cf132cbe9177d6ad4c65c6e6d654a58721cbf067fb41019c9499917c323f\"" Sep 11 23:46:04.988059 containerd[1521]: time="2025-09-11T23:46:04.987999375Z" level=info msg="StartContainer for \"0ef3cf132cbe9177d6ad4c65c6e6d654a58721cbf067fb41019c9499917c323f\"" Sep 11 23:46:04.989503 containerd[1521]: time="2025-09-11T23:46:04.989456217Z" level=info msg="connecting to shim 0ef3cf132cbe9177d6ad4c65c6e6d654a58721cbf067fb41019c9499917c323f" address="unix:///run/containerd/s/dbe9becd0fb74a62bbec155b8b3057ccb4064fe9f247180b5310e5395a870338" protocol=ttrpc version=3 Sep 11 23:46:05.013645 systemd[1]: Started cri-containerd-0ef3cf132cbe9177d6ad4c65c6e6d654a58721cbf067fb41019c9499917c323f.scope - libcontainer container 0ef3cf132cbe9177d6ad4c65c6e6d654a58721cbf067fb41019c9499917c323f. Sep 11 23:46:05.060028 containerd[1521]: time="2025-09-11T23:46:05.059964509Z" level=info msg="StartContainer for \"0ef3cf132cbe9177d6ad4c65c6e6d654a58721cbf067fb41019c9499917c323f\" returns successfully" Sep 11 23:46:06.080035 kubelet[2649]: I0911 23:46:06.079958 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74c68f96f4-6lq2g" podStartSLOduration=23.329238365 podStartE2EDuration="30.079924501s" podCreationTimestamp="2025-09-11 23:45:36 +0000 UTC" firstStartedPulling="2025-09-11 23:45:58.215662291 +0000 UTC m=+40.470807346" lastFinishedPulling="2025-09-11 23:46:04.966348427 +0000 UTC m=+47.221493482" observedRunningTime="2025-09-11 23:46:06.07918104 +0000 UTC m=+48.334326095" watchObservedRunningTime="2025-09-11 23:46:06.079924501 +0000 UTC m=+48.335069556" Sep 11 23:46:06.115463 containerd[1521]: time="2025-09-11T23:46:06.115422531Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"0ef3cf132cbe9177d6ad4c65c6e6d654a58721cbf067fb41019c9499917c323f\" id:\"1c3f259f05b0bd6e792609ec75f945179c46d79524a2f65d3c13ce0ef72aed03\" pid:5168 exited_at:{seconds:1757634366 nanos:114372582}" Sep 11 23:46:06.173985 containerd[1521]: time="2025-09-11T23:46:06.173647595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:06.174237 containerd[1521]: time="2025-09-11T23:46:06.174201650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 11 23:46:06.175109 containerd[1521]: time="2025-09-11T23:46:06.175076514Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:06.177758 containerd[1521]: time="2025-09-11T23:46:06.177615945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:46:06.178823 containerd[1521]: time="2025-09-11T23:46:06.178293044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.211733771s" Sep 11 23:46:06.178823 containerd[1521]: time="2025-09-11T23:46:06.178360366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 11 23:46:06.182486 
containerd[1521]: time="2025-09-11T23:46:06.182403519Z" level=info msg="CreateContainer within sandbox \"ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 11 23:46:06.190295 containerd[1521]: time="2025-09-11T23:46:06.190234577Z" level=info msg="Container 0e151c224cc31610c514f6c54ff4d49bfdd7f6c1360628dc53cd439d20a8626a: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:46:06.199315 containerd[1521]: time="2025-09-11T23:46:06.199240468Z" level=info msg="CreateContainer within sandbox \"ad2f541e6141f375ebf46f52515248eb3675b55902971a18197e4cdedcec7434\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0e151c224cc31610c514f6c54ff4d49bfdd7f6c1360628dc53cd439d20a8626a\"" Sep 11 23:46:06.200363 containerd[1521]: time="2025-09-11T23:46:06.200153334Z" level=info msg="StartContainer for \"0e151c224cc31610c514f6c54ff4d49bfdd7f6c1360628dc53cd439d20a8626a\"" Sep 11 23:46:06.201586 containerd[1521]: time="2025-09-11T23:46:06.201554213Z" level=info msg="connecting to shim 0e151c224cc31610c514f6c54ff4d49bfdd7f6c1360628dc53cd439d20a8626a" address="unix:///run/containerd/s/9a5ddffcf810815649d8ee900daff892b6913fb16e4577deb25fb087b8b44cc8" protocol=ttrpc version=3 Sep 11 23:46:06.223054 systemd[1]: Started cri-containerd-0e151c224cc31610c514f6c54ff4d49bfdd7f6c1360628dc53cd439d20a8626a.scope - libcontainer container 0e151c224cc31610c514f6c54ff4d49bfdd7f6c1360628dc53cd439d20a8626a. 
Sep 11 23:46:06.263096 containerd[1521]: time="2025-09-11T23:46:06.263033487Z" level=info msg="StartContainer for \"0e151c224cc31610c514f6c54ff4d49bfdd7f6c1360628dc53cd439d20a8626a\" returns successfully" Sep 11 23:46:06.906727 kubelet[2649]: I0911 23:46:06.906678 2649 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 11 23:46:06.911054 kubelet[2649]: I0911 23:46:06.911008 2649 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 11 23:46:07.090551 kubelet[2649]: I0911 23:46:07.090473 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-n2v2z" podStartSLOduration=22.129719306 podStartE2EDuration="31.090456275s" podCreationTimestamp="2025-09-11 23:45:36 +0000 UTC" firstStartedPulling="2025-09-11 23:45:57.218496461 +0000 UTC m=+39.473641516" lastFinishedPulling="2025-09-11 23:46:06.17923343 +0000 UTC m=+48.434378485" observedRunningTime="2025-09-11 23:46:07.090197428 +0000 UTC m=+49.345342483" watchObservedRunningTime="2025-09-11 23:46:07.090456275 +0000 UTC m=+49.345601330" Sep 11 23:46:08.001400 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:32986.service - OpenSSH per-connection server daemon (10.0.0.1:32986). Sep 11 23:46:08.086394 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 32986 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:08.088521 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:08.094952 systemd-logind[1456]: New session 10 of user core. Sep 11 23:46:08.101039 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 11 23:46:08.314542 sshd[5223]: Connection closed by 10.0.0.1 port 32986 Sep 11 23:46:08.315142 sshd-session[5214]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:08.327280 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:32986.service: Deactivated successfully. Sep 11 23:46:08.329015 systemd[1]: session-10.scope: Deactivated successfully. Sep 11 23:46:08.329685 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Sep 11 23:46:08.332214 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:33000.service - OpenSSH per-connection server daemon (10.0.0.1:33000). Sep 11 23:46:08.333149 systemd-logind[1456]: Removed session 10. Sep 11 23:46:08.398087 sshd[5240]: Accepted publickey for core from 10.0.0.1 port 33000 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:08.399575 sshd-session[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:08.403646 systemd-logind[1456]: New session 11 of user core. Sep 11 23:46:08.413052 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 11 23:46:08.592425 sshd[5243]: Connection closed by 10.0.0.1 port 33000 Sep 11 23:46:08.593384 sshd-session[5240]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:08.603081 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:33000.service: Deactivated successfully. Sep 11 23:46:08.607828 systemd[1]: session-11.scope: Deactivated successfully. Sep 11 23:46:08.609348 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Sep 11 23:46:08.611552 systemd-logind[1456]: Removed session 11. Sep 11 23:46:08.613168 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:33004.service - OpenSSH per-connection server daemon (10.0.0.1:33004). 
Sep 11 23:46:08.680679 sshd[5254]: Accepted publickey for core from 10.0.0.1 port 33004 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:08.682189 sshd-session[5254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:08.686370 systemd-logind[1456]: New session 12 of user core. Sep 11 23:46:08.697185 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 11 23:46:08.838548 sshd[5257]: Connection closed by 10.0.0.1 port 33004 Sep 11 23:46:08.839125 sshd-session[5254]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:08.842762 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:33004.service: Deactivated successfully. Sep 11 23:46:08.845097 systemd[1]: session-12.scope: Deactivated successfully. Sep 11 23:46:08.846259 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Sep 11 23:46:08.849635 systemd-logind[1456]: Removed session 12. Sep 11 23:46:13.853584 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:44490.service - OpenSSH per-connection server daemon (10.0.0.1:44490). Sep 11 23:46:13.923008 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 44490 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:13.924439 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:13.930198 systemd-logind[1456]: New session 13 of user core. Sep 11 23:46:13.943106 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 11 23:46:14.104104 sshd[5285]: Connection closed by 10.0.0.1 port 44490 Sep 11 23:46:14.104409 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:14.109718 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:44490.service: Deactivated successfully. Sep 11 23:46:14.111772 systemd[1]: session-13.scope: Deactivated successfully. Sep 11 23:46:14.114185 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. 
Sep 11 23:46:14.115809 systemd-logind[1456]: Removed session 13. Sep 11 23:46:19.120587 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:44504.service - OpenSSH per-connection server daemon (10.0.0.1:44504). Sep 11 23:46:19.186747 sshd[5301]: Accepted publickey for core from 10.0.0.1 port 44504 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:19.188105 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:19.194008 systemd-logind[1456]: New session 14 of user core. Sep 11 23:46:19.201034 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 11 23:46:19.384597 sshd[5304]: Connection closed by 10.0.0.1 port 44504 Sep 11 23:46:19.384866 sshd-session[5301]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:19.389330 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:44504.service: Deactivated successfully. Sep 11 23:46:19.391085 systemd[1]: session-14.scope: Deactivated successfully. Sep 11 23:46:19.393527 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. Sep 11 23:46:19.394768 systemd-logind[1456]: Removed session 14. Sep 11 23:46:19.796908 containerd[1521]: time="2025-09-11T23:46:19.796843906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ef3cf132cbe9177d6ad4c65c6e6d654a58721cbf067fb41019c9499917c323f\" id:\"ba7b18c0f808ed26b6e7b03f68bde8c49642904cd9f0a2d55c6696f7922f91d9\" pid:5330 exited_at:{seconds:1757634379 nanos:796547979}" Sep 11 23:46:22.022042 containerd[1521]: time="2025-09-11T23:46:22.021997876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1a81868b1d9b680c6bb16203a63aaf8563cc85e36fee2f6b8058b3a530832d5\" id:\"4b6901294d955b762ee176a74bbffb136307c95c0f063010cd803922205b5e7f\" pid:5355 exited_at:{seconds:1757634382 nanos:21435983}" Sep 11 23:46:24.396038 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:59342.service - OpenSSH per-connection server daemon (10.0.0.1:59342). 
Sep 11 23:46:24.460106 sshd[5369]: Accepted publickey for core from 10.0.0.1 port 59342 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:24.461347 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:24.467198 systemd-logind[1456]: New session 15 of user core. Sep 11 23:46:24.477061 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 11 23:46:24.634521 sshd[5372]: Connection closed by 10.0.0.1 port 59342 Sep 11 23:46:24.634905 sshd-session[5369]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:24.643805 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:59342.service: Deactivated successfully. Sep 11 23:46:24.645506 systemd[1]: session-15.scope: Deactivated successfully. Sep 11 23:46:24.646348 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit. Sep 11 23:46:24.648859 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:59344.service - OpenSSH per-connection server daemon (10.0.0.1:59344). Sep 11 23:46:24.650215 systemd-logind[1456]: Removed session 15. Sep 11 23:46:24.706764 sshd[5385]: Accepted publickey for core from 10.0.0.1 port 59344 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:24.707992 sshd-session[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:24.711695 systemd-logind[1456]: New session 16 of user core. Sep 11 23:46:24.723048 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 11 23:46:24.918130 sshd[5388]: Connection closed by 10.0.0.1 port 59344 Sep 11 23:46:24.918924 sshd-session[5385]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:24.931919 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:59344.service: Deactivated successfully. Sep 11 23:46:24.933505 systemd[1]: session-16.scope: Deactivated successfully. Sep 11 23:46:24.934309 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. 
Sep 11 23:46:24.936037 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:59348.service - OpenSSH per-connection server daemon (10.0.0.1:59348). Sep 11 23:46:24.937017 systemd-logind[1456]: Removed session 16. Sep 11 23:46:25.012658 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 59348 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:25.013824 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:25.017411 systemd-logind[1456]: New session 17 of user core. Sep 11 23:46:25.027085 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 11 23:46:25.635282 sshd[5403]: Connection closed by 10.0.0.1 port 59348 Sep 11 23:46:25.636106 sshd-session[5399]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:25.644555 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:59348.service: Deactivated successfully. Sep 11 23:46:25.648943 systemd[1]: session-17.scope: Deactivated successfully. Sep 11 23:46:25.652137 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. Sep 11 23:46:25.654608 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:59354.service - OpenSSH per-connection server daemon (10.0.0.1:59354). Sep 11 23:46:25.656541 systemd-logind[1456]: Removed session 17. Sep 11 23:46:25.714094 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 59354 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:25.715525 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:25.722733 systemd-logind[1456]: New session 18 of user core. Sep 11 23:46:25.731041 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 11 23:46:25.997347 sshd[5428]: Connection closed by 10.0.0.1 port 59354 Sep 11 23:46:25.997716 sshd-session[5425]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:26.010399 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:59354.service: Deactivated successfully. Sep 11 23:46:26.013714 systemd[1]: session-18.scope: Deactivated successfully. Sep 11 23:46:26.016120 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. Sep 11 23:46:26.018446 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:59362.service - OpenSSH per-connection server daemon (10.0.0.1:59362). Sep 11 23:46:26.022302 systemd-logind[1456]: Removed session 18. Sep 11 23:46:26.085821 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 59362 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:26.087523 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:26.095122 systemd-logind[1456]: New session 19 of user core. Sep 11 23:46:26.107117 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 11 23:46:26.231250 sshd[5443]: Connection closed by 10.0.0.1 port 59362 Sep 11 23:46:26.232080 sshd-session[5440]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:26.235562 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:59362.service: Deactivated successfully. Sep 11 23:46:26.237896 systemd[1]: session-19.scope: Deactivated successfully. Sep 11 23:46:26.238832 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit. Sep 11 23:46:26.240337 systemd-logind[1456]: Removed session 19. Sep 11 23:46:31.243210 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:48762.service - OpenSSH per-connection server daemon (10.0.0.1:48762). 
Sep 11 23:46:31.306665 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 48762 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:31.308004 sshd-session[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:31.311670 systemd-logind[1456]: New session 20 of user core. Sep 11 23:46:31.328038 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 11 23:46:31.477244 sshd[5462]: Connection closed by 10.0.0.1 port 48762 Sep 11 23:46:31.477598 sshd-session[5459]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:31.481135 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:48762.service: Deactivated successfully. Sep 11 23:46:31.483216 systemd[1]: session-20.scope: Deactivated successfully. Sep 11 23:46:31.484018 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit. Sep 11 23:46:31.485051 systemd-logind[1456]: Removed session 20. Sep 11 23:46:33.874109 kubelet[2649]: I0911 23:46:33.873745 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 23:46:33.939087 containerd[1521]: time="2025-09-11T23:46:33.939002825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a98745f4b474319152feae1708a977e06da8e1595891840ee88825c9fc544fc7\" id:\"89947bb373d03b5c9478984ecb180974447c34e891eb62384482ed9222f974ed\" pid:5493 exited_at:{seconds:1757634393 nanos:938675427}" Sep 11 23:46:34.109087 containerd[1521]: time="2025-09-11T23:46:34.109043001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a98745f4b474319152feae1708a977e06da8e1595891840ee88825c9fc544fc7\" id:\"701c865d598bfd96a5d875c28eb8d35243ac4ae789072bc119c89205fdf3bf7f\" pid:5516 exited_at:{seconds:1757634394 nanos:108755602}" Sep 11 23:46:35.840603 kubelet[2649]: E0911 23:46:35.840567 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 11 23:46:36.096682 containerd[1521]: time="2025-09-11T23:46:36.096550068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ef3cf132cbe9177d6ad4c65c6e6d654a58721cbf067fb41019c9499917c323f\" id:\"b464cef332db5702e1d4d9c57227724fec44e76a2efd0c0cb5bd8a664ac71f50\" pid:5540 exited_at:{seconds:1757634396 nanos:96366909}" Sep 11 23:46:36.490136 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:48778.service - OpenSSH per-connection server daemon (10.0.0.1:48778). Sep 11 23:46:36.573163 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 48778 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:36.574989 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:36.579027 systemd-logind[1456]: New session 21 of user core. Sep 11 23:46:36.587037 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 11 23:46:36.815646 sshd[5554]: Connection closed by 10.0.0.1 port 48778 Sep 11 23:46:36.815728 sshd-session[5551]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:36.821421 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:48778.service: Deactivated successfully. Sep 11 23:46:36.823106 systemd[1]: session-21.scope: Deactivated successfully. Sep 11 23:46:36.823695 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit. Sep 11 23:46:36.824959 systemd-logind[1456]: Removed session 21. Sep 11 23:46:37.813471 kubelet[2649]: I0911 23:46:37.813431 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 23:46:41.825362 systemd[1]: Started sshd@21-10.0.0.82:22-10.0.0.1:39698.service - OpenSSH per-connection server daemon (10.0.0.1:39698). 
Sep 11 23:46:41.844910 kubelet[2649]: E0911 23:46:41.844726 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:46:41.878293 sshd[5572]: Accepted publickey for core from 10.0.0.1 port 39698 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:46:41.879722 sshd-session[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:46:41.886787 systemd-logind[1456]: New session 22 of user core. Sep 11 23:46:41.897769 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 11 23:46:42.099939 sshd[5576]: Connection closed by 10.0.0.1 port 39698 Sep 11 23:46:42.100590 sshd-session[5572]: pam_unix(sshd:session): session closed for user core Sep 11 23:46:42.105063 systemd[1]: sshd@21-10.0.0.82:22-10.0.0.1:39698.service: Deactivated successfully. Sep 11 23:46:42.106841 systemd[1]: session-22.scope: Deactivated successfully. Sep 11 23:46:42.107807 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit. Sep 11 23:46:42.108999 systemd-logind[1456]: Removed session 22. Sep 11 23:46:43.839616 kubelet[2649]: E0911 23:46:43.839153 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"